var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz  [binary gzip-compressed kubelet log; contents not recoverable as text]
D^_ƒ].yjѧ))6<p_YM<Wn~6鯣&2헚0PFħ.&c'7\$\jt( ] ]AENQW ]1ZBWHW|rjFtŀ ]1ܤBWOpNWR:Dr͌+]1\bQ^ tu8tv&͈pzBW֚}+Fy#:;'b_VTtp z6ҾF*8]p03pm ]1ZgkWHWϜ]q6tpl620Z坛BWCW;;b_t_ۓ~Nt(ldЕ}[6ܽvO"+Zh{b퉶ֺ1;='MD{N./r}l=e\g׫3dP^l|Vx:7CA߼&2B4*?=b`?7JA+nMBk)aYUmed?0IEb?!9oWٍN|kK*-+ ΐ)'rKVV!Ev9[ǂ̧֡RK?~}B׿ (v8+lCG4vvGGlVZO]z]iFtφ ]1ZJNWE+>*ΡqAH~#?HruA`tDlozZ.ϫի~el_Goe#_h{8:Yleg>ņ_{Vػ{~x:6K:k^{o;Y^|0gG7=eHNk$wN1K(:mmeP|C~uowzt}ϫa/^9JMy3a3*Y۫WqW}pfv1 -sɳoؼl͵RmLNm=+f%}tTcn>R0hw(vRC]p:?b;]1((gDW𼯾3qplWW+<ۯy/zؓt(# ] ]3+֚fCW޵6r\ٿB! z?YH#vQ1IƚEۤ&EJ$%v{lˣ~TԹpϽfqhj:]JVЕ6̒65tq2B"e7"iZ5 [AZCWWM+DeGWHW݀ol \ӞT;t(e6"ʾ JD0Q?zs+&7q+=qβ2R,owZ0(œ{Qps5ǡ=W8aeq8)t }`ZCWW5+@ YEi]`vqF=Оk}P*Un]!\ +Im+l \Eh[ RtBSWHWW_VB= RbBW5>DK+MmJ#`mZCW׶ Q6m_ބ eN,hk |D_a]] ]A7l]+[CWWeu^gGWm+y`+B,׼YAlځd.4 QGԛjϟ; 0?gpZuuCٴMp۲qR5;U1QpߘMSJM j^zG5cE{ʚw[(ڒ.0ۚ ՖўkոP.0F5-+9֟U-th k:]!J:@L7tn ]!\iBWV4 +\ՈI]!`!ZCWãC{CY!Վ.$'\g_88\M+D:uut8WTl]!\uhl:]!.wutҶM5:T{F5"\ݚQ4 "po]!`KZCWWWWu/-+hk G]!+DiEGW_ ]^]JHe?MGȀ6nM5{QoiMܠE8t[ul#THF8gL8"Cڑ>5qJN:BMNMRecU,/=4^X^9CyS{{sÿ#X?Z[,Q ^rmˋ~)B_?I, ɏ =|7|?h9};g/\?ˎsޗ5Y.f@OVgާw;.]te/K[J8 Nj=xnrgm3? o5cT3`]7OELwIwxOkǤ$PXbf Qy "dꘘƯSp'|ztinc8߇`qy庼798̟h5N&Xo&Nާ4-:E>G"ͻ%<v<Ŧ_?ǓOjcgP%Í]j! oulT*SYey`_eWMF7oتz]HБ)@o;oe OQ鰀9Fo,f%ûi5ۮ| ,6 <44ԻoցQoJc0-EHo,Ao7dWBz(^Jj -Gc)E z?P(z>e?t`}(PxMnd-Fjiwi6Ơ$vP=Ի*plߧEzVaYSѥۥ,SLDс?VW9[Rm4Jn7_JY ]L!oI/Rҩ ?W\a19bQxyRDS^zQy3pl/T8TUQIJ0K!.ќI 6S1 OH1p. ptiisKR'b}햢=M'4-Dun.n\59/3zaOS},R@߾輔[&^1!2>XBU2*A-2 cu[໋_զ@mKw ^s6TFCv͙3577.\CNb2o_X *x49ʲ$Y0E8;"f!:qTOI 'OvEqR𗄡dƃ.D!phFS੤θL_ĨB3MU?2K19D ! ˤbbj1:]p#S;v^xjD6ͱDqE {v#bjJ>h9Ə ns7$';2*u]"VyڤpmEï+Sy"QP~G|r0Zԕ]EyEَ|jVjeYBT+c-P65P&LrMh~8ҧ;&xCAW=oLo 3khQ#\\Whå鯟7ߡ5^*јcX4olߌr=v [> )ouC oQf#lr>ĘY|ŖUYRJΈL9qK:Nu n{>71zߝlvT[ JqCVFIJܑ:IiPxǹ'[iK6גl wclb#"?iڛi9Y|$SS΍a# * vx'q^g%71'ou.Z#P2p`+N9Y|A>6KWY9|ۡYɠ.xt ۭu} ֬:8iHڻq??~qͮ[ ykzm4l5jg)3)-Eezmt6/Hl kh,kXQK'+f E|g+x$֗H,V ^ؼ6bBtFi"B M'3$-؁)e.|̥s.E:kk*oAzj1!35pB6uZW,ԤEsNNҳ/`S,L1}]x%}fsoNB6;NIB|}}Ɨcm(c'-LB:c !,|2JHadҜNu?wЩƯP\)s>.!KM!pBM=w1;L2}-9JnJq(4ZIbNgΣU@8SR0s.K)\G;'s&(86hCGC4(‭<EĖPƽuH ÌМ-bt5.~Wn]{u.&arCM'~yhS5,U8"q6<=k÷BJv_l,&T48ƎPҺsiO+ox z[dy` 9rQIJGҧRiM ICcL^End#E=l#!oJl} vt?svj~XuMB('c\Cm`H9 b1?ED xW`[cC B7=:`AGa^"+Fa+ ),M wTN|NA]lU+UCh{+YU!w/Au-y5X-[\--2쉁K$4G4L++5B*N5 ;t"HL xj<;z:ѐ1cZeZ#;DAG)nP$Ca Ҽ?A-xȱ>X&Zez}pU-qDj5SATݪomdYy.R%kvSqHdhR"Em[v}^A1=l/i77I}0r$MVS' atQb/o ,VvG+%! )ń Ry&`ܜc"]+@oqUv?,M(y\]*~Ǖ/MbqT1<^u4;I)y?n0IC4ҹTuGU8}7ڢ2\+ v*aK ],h-p 1٬T`QyK$H$6;&2ww&"2d3- mk~MKF}x9 +~]rOz}yI02? ]|ŸfxlޏM*s\.z;[0駣Th}lizovSx{9.? Ruv׽q~x0v#j3giݷɇڹ&o-tt}0nQ?ѕW(tK6CL9u gȃdrC\BK0L?Gq}2,'3?/yf>?(a5>}?N)/T_\N >nB^4;"AL#D?:>irB=@z fv2h:鏀C8x1 O0`A` _X٢+&, O3.L{M>dPNb? ΗEh&!}1 WeX Y͙S#a&9h<*@{bD] 柰8 :9N?`r`HB9UaaR"Z^~PqzbƗFb6OjReJa÷ +i~eå"׾`˒=M%硷KƊVQy i%({.Yb #`1 & O`BR&[EFd>ZRMD @Op) Q[M*Ffd@3FYqְP,T-cp4W>Čsrˉ&;4AelO؆9B0S:Cڀ% $0%fF<ۛ` V*tC2'ZK(IIt@9$^Fґ0Rz"r7حm5s&殠vk㡨[Fmݡv`W{`%!!{  ,p$C(rn-GI$xN/V0r!ha *v4H$G(XX904eR4e5[b ]wb K{V};/vUޒ|}s^f\=UBͼ`o<k{Ϊ\-'{\D{Ԝ=xLij#~=[nmL9ߕmޠI̓']U|Ü*{UU^73yw|~/ܾT?qxJ=aQR,<)La]dYgk, \bW(~uXA $SopݵB[ևpNꖃYVveelhzGot^s8 jQw4Pyr{{cG#6:_:W㢱=)Gp c*O< B:\%-KGp  \%muJZ +| & \}-WI˹-Wi @a*URW \ ]708wxqIhq\HU䀫 殺a~g~#z?mMٗTL{`CO|d\Fy؋z[}tЌMS|jGa@/E5 "f.oNdwYQҥ$ .9L0+ zs)Ɵ`L' ̞N>jK-c1SY>4]Z; 1x/~z}.MVā[1PƷDG*4VZʪ1FLdg˵ }Vl2zg g[/O[9aafE_g/i3-JH!`AjdUR P*KcjHBtM]+DimOWGHWs%SRW8wt( 9:FR\YB<`+Vv>DtutJ$DW( ]\+-et(9=]!]A͋VKWؒtފ@,BRuSHWVH.S+,X: |QCWfǮ7_vvY3˩ʈ 6"l@8SX)]ܢ}uP{hO2{e^ݩ 헭|h}^ %-I9/rʭ eΈ On(~x59LX+ES[D;@O;c0xjSI $T *uBtut$W1`-+k+@Id=]#]q% "uptf=]!] 
iH)wM:tUu%1ҕTw`V#oT uBRtutTJ%DWX)+Ku*th9:]!J!z:Bʘe$T;µɤ-7t(ƶZ=] ]MeR _ "wpi2thyAD)TOWGHWd5IifH틮6vfQ5/]o6]tlFH0V&ܷEXtTv'ZcSR<6vA]u矡tcn.[Ԫ;Kl@0?t<ۜjL;c1m>f{jSC( R'CWp?PgBb{:b3!+H:tp͡Ow۱ـPiyR4VbGT U]ע܊]Nݏe5M]joYY!*I=Do>͋6cw+Yo|l<]2t!H"z#mF9/X8h陸av2@5ۮ| ,M.-d_>du5@K.kXжX\1P(t<_C_:Z!(Mgo)Y`AɢYe˫f|3>I+S*v/AVB*.KYUM!6t@q%OQOT S.7-6:ej~9bR*Rper \(a zR<]-=AOkA WFt3  KP%/ |ʩBx ][(Jf3B"hɦx¡Z\% 4`t)  r)+ JDrѱ{΁U[p"qES4[ܑ{o):/l F(yr>ȱsڍ~m2V"cMud{wo>)T@R@<꼔[&P^ gKƄ"(SxK(J%,̅w/λRo_.pUp7oyz2W8hb7"-ohC jߜ9 c#r`զa!N"D%U9 EnG%I)#-Λ`h"'$F'o{wŶEq²/ C5)BΓSj<7S_M$KIBhV`J n<KNM+yЖ JʬbN <1$ݳ~c%9#u$rnp#r56;!w:|Iѧ@GJJBeiJbpe y!J)^=%R^=[**|Qp&Z靁贈(gj@@(JOܞ=xop: l,-d=O!&YM2jـ*3L (]?kcN\_A(P&)ofxpckV6B7'">2ΥܺH)eKmm)*EO j1dE?&9>,ԥP?-4JV<_e?2i|}׹,^n6p#\>a"5J F y-ޟ+z`Q4,x N"^y0~=s>nχ= go=rIN^jƚQ- xɬGLzgp]̿)b ?[M`YR %V3bAo/54H[ SJ pI@@m)'r VS}Um&$Ź42{[Y'p#%O /pbK,$[шʯvəF\>.ּmxwH0ŀ0F" k9~xVL% sg8E%*uXH*PkJ]VD.iԻug _[mT-A>wPQQ׿5Mv4zЊw |UvӭDݦj$:ׄ.}ߨtajZ̝n ȌZ_eȽqhÕ.`8O޵5m$s {.=7V!qvqmv׵N^iGlS:ԆT ll5vfl. gnEHGp }13Ef,#g J))JQIJɮ3 Z7U='ӂ 6#7NxˀޒEiFT$aM0&K3Rz).Jy..'_) jM&V vgG,- }|R''kBGz;CNס<+a7aiՆ-1 Ygl YQو) Iv0hc ~D 1 6m!Η Nhw$(5F%0ؐL!ө,ldYBȲf)ǜֈ2S&s h&!brr.{4Cמu";X[Q.H@RJZ8HCoZAӭLsujB5; lnͭխYl;K,-FoiYVhX{ &9]ܛ %B "eΤ:Pf%l~;՗@2Qt:1>;rd9|H9p/.v ~ B#?Jhk9ά&?t'zni~i|t&#JJ~=Q3_idTpn  mV9aU7Wd{fa*fЄ$Sz(T6@tX;"; (FS.5H' ѓiT)HV󁷛Or|CW$}|UCh캼n&y*!4 g!/pzr\%y":d%A88S`due#[)ymu5dc1_mxZ=mu{@W/#aݯzɱ:t3JN;dJNSw%gOt<1.+9 KN KYO) FmFi%[qkcU {+*,\Snt}\JtE[kCo>Ԇ4p%HcBsD@@`/k~%V[Ztfv٪S?0(ôS, =8 z*i4Ϳ&y&D2_@iyF3x,vF[C<検+$+REa%f'8`3<\s:Ҍ~Ayڈ7oܼWԿ>ϿYofx}.}^r<=}a0A-[vFa8#nRlu|"WE'_ϯߞ4iYhXý@5#q.3L'Ɍd0s^ ERrԮWQ^=T7MO]Rهnd.=dtwEY ԥ/ g"[cé_~ycS׊/hs\tbυxRjŠߔf#e3(͠'#OHZsq65w*s~1~ek.䗌ǓM 7r8;i<@=Gx< yƒWfA5+|RsMuI7203n܅]nQVNf;\pOtA~wsၻD\hQqm5Wkx7mZ,\LnCrKܰn޲ne/h#EpfruoȻ5X y;}O|RFOeі£Ǫ'\sfKVߛxRq~kDkD"B2,Wl@̑W:J(Iq"Mk:9xKV T: sV ZMJVYXf.Kz8@AhAtQlHn_̸Q+5xdh'Ӟg]w˷P %1B*i-pU04Vwv+jmՔ4-˿8xru,Adߠ9bz56AIzBH&/jY_N>@R-D{͗JFVQy Vr5Ĩ }H):5YGAҎB0E'0"&ǘgV!M6fAO u켈ФHfD`&j[ndlVi [}жƒ)[2ް*3?Y_4ήfɂ~T8? oŬR䥖M(H`%a,k' c7%9d"2 Cq0 &Rؔ29pduYKe{H\[nQ\r1[}Q۵ڮC΂ݾ'o5&<`I kȻ%M` 6CnFO*h9d251E6h2 zR9 ug7NVX5k:Dq oYC4z1!M)m4 VbrL0I83Ur3RBH6\F" I'͍L6%ug7"&.])ٚ싋e\.v5%UI15IQdA@bYy&0Y]3сtx\0{ zxCԼ[Cm2[ƙZb:6\y[R(o2Qx-OK̓VY[:p@7=pդ :C5wtZ]$ Qj)<ۢs< M)bwi82%:UZ,o7Nxm,wk #M( zERI)rȆEaqiRq~oBfo u- I yjuIrV*D/6Y1Ι(Jeddc9I-I+BԪpHBhNlˢSЎF rBu9kmȲ`;@> 2`|l,^<S͇H-2t˖ qn6oUݺ:blM3L8άח,& > xܢaCW+PA37 `"cP!0c|ۖiR@cVir Lhщas6pM;o"0@3W'p,_Tv̜uDKk2"X/>m\ۜL:xbmR6cqsfU▕X9P՚#αTmVw O0Pck3&uEox(RWNqu$~B`ˀ!k=1e&&D5 #RYKIE좔\ Ȟe5h,7\LI:ŲmgB`lB:kf8 36Z/b&Ac :`ԞW"= ^m..9ajYˈ@46Zy9dxH9F4`,|ZÀ0O04U`;#0pZ(Ϥ  *D[W "QTcpޤ8d,&bYGcCk,Ii\cs>c\EVlu:U%*RU !Fn`40N"X!9T0xHSR4e"ɠ\:H'E $;&򬐛Հ\I1#h`\AS@`!ĐiY03u*nF)K<";1 :h,,u0wAd I(o L tSȆx7Ud&+7/4R2AV%TtgԌQ"!\`#)#  VWjԞAX)h $!(byI+S#2̋EگZQ $WEA8PRil,@x@HtvːX6 Uc+:c"& Lg ]޺^Ko+fUy$' c@J6K9b: >dsaЀnB|_\p 49ac[-ky"AT00ƋA G\*MV%^E Hrc005n8&8dg찱ڂ|Q cA1J# I2HeVh+26VYѸq |. 
D,/x:7Vdlh >zX:;**@DtʢEɜ/+II|'Ӈ>p@Zi298kkl#: gh ` e@Ao 2ZdpX0Fa68c;gQʐB@z:|*i5kNO򀱼ڳ0M`QIRhW jPrV-=XuX@KF039l@E}׌_N%TaҹdRac-Asnu6i~ѭO~״ 1eYJD0umMW-d0LVL-FTa-NSRHuZtkHk sQS9)  `Ly}ᛏ 3| g&%<%2`'u>fds\@7Th/ܖ{כMNX߾;3uйSZ=raK)_PA-ĤKwpL;YM @BTAeVVOI=HOe :["e\UmCA 0d} *]A)ndw~UlU=U>ao|д,+f]>4 t̸v:63OO=޼w~C{ɷ7q[w>נ[_=|Bj.L<8r5nh^oCnև{cŎ=w|źЗ)R̍d_kcee0gsFOe/2ɔԣ/%sZ;]WG"-QμaQע`+Y6P"Bg[q\0kZU61nYYP鵝h\}u6+wyܦP'X&}ED2,p_7%ӾLvNc{ rl~n5ؚڳ8q{ok[i ơf} ;O:u8'|O~`zs}pRq- ty' O+dST QA6*F٨ dlT QA6*F٨ dlT QA6*F٨ dlT QA6*F٨ dlT QA6*F٨ dlT QA6*F٨ .&f9ac}\şjE߰r}Hѷ̏.Awk.'=;IpC+/&t% '@HH$ =X$͓($YV2w6 {Z f++ȆgIr ɒk^#͜Å$Wq)ԧ\\_'jn˱sgkžt)#RvR־ bHJ1INuRr\':)INuRr\':)INuRr\':)INuRr\':)INuRr\':)INuRr\uer+|My6uXkWJ'Hw@X)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@zJ?'%P$G sQZ)VnH @IO@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H r@Gh+~iڶVՆTJ/Ot"'I?.\mp >#`%a2Kw^r4 mvtJy1#斸iο}u1\ó<-qMx Z萎^Zj=[vi;ϥ`^o4u6{?׫f WC =<+ׯi6r}Ì+ +HF yIڔ˱"䔹q\2m E~nۅZ.WoDpTyʻ߾:CO7\ !o#҄_W*NM+C.w"]t^һn`~{6nq* PE.l5O1vU*rV]rNVq6VY#ޛY|X/JOGKqPNk[9?]_t3D ZV餉I`JD[&&Z8cld3Bٽ+sW3=b+uށ{Q\ߢx;$\Q\Zjw06YMp^.u1лXN^XS+fdl;JUU;q1yƹ7:=*S[\3yW0wl՚Ƴn}SXt A;Lg%~Jd|9rjFlɟ{}qefpưvwr>,Ηl/ ܿK_:&pٻ6$Wqw>4ޝY=3؃<)(MRՍQERr\!V1+*"Edċ i.bWc?fLLH K jpL95jpmٮ+q+`0ՒpWz^O2d*z-;ʜS)_:kiOw;+pEģMʇݭ ^Lk|ҳW4,fNzed]^VL=OD4VFӐ@c4 &SZid9%M ,665=!8+mq&]|iLN;|z \ ==_Z7l`љ&H-b2f"y2!KHJi(Z+:;6}F <}>ITV, 4ʺuDCdZgʔ;K<@ Zb58A0#\EH]7P$? \ Z0&PV{ ֛5TOEWeoߚ;k/]#QCT3V_;%xSIbYYJgRJ_Fܮ/<fϑ*?_ncU/[,1FYkzw;} vZ$\ ;NU(leb72B(?VYc&ݗeaiw*F\P#j/5VFU[J$27IlŲfsvZ9wﺽXuV͋;i|)QSNֻoyo.I%ЅmxГ֧Àn VM>q '):z2 k*D20F"X$dԱ[ zB0XJǴӒ{'qEvD9\0!CCD) ^2כZ~uMg;蛹WAOtRAx,8`-*uN-6:Snj= LYF%oqBFSNGSؔ pZ2>' `> Vsj L+ekIXفz}g퐳|^j~c^rìsFXU9b)cwsV" ENfo;!, ;E;Vypu0 )h$ op2=p8[N؃Xja4)FٸPBL ι/.\`wCJW [/jlŮ!oD~F/}qm#92?9aw8A %  V҉b_xYA2U ^b}2i7i7i!3JMPTh8A+#ZX S$&iRRƹt["Rz]6&;+!p:PO1&d潆 #x깶l?B`[x$sګ#u= d[u^Z68GP&q%@/#3^\I2i< {s5ol SYE Oү :5DUUv}e rk~"ݞ,h rKA~Cۿ &P\"LD˔1[rít^])Jyrڒpq>q zi>na.Rvm\#0jf}A!/m(L]εI\XgL!O^ )LSx4M6̂:2O Zh22T(7R8!J]mkyDwE֞0{/HQݭc&Վb|Q,G!E}Ϳr([u!!N:Z@=uSMG#cͥ~t(+[nUtNm>8'}HL+r1]_6?XbBxo2|^_NPS@#AL9Pe_,̺Ok*aSpQ!B1PG N B 33]tH\#d}V;%J̞%>`^+m+,K4|Wx^' N pl=uN ' #to:_EO}:ٜ>H"ٚBa H(@TpadHNnvrO"&ȉRIKE&":$3%ah KSr))!jS^PcpäD huYa37h00f{EmknxZc]oZg&8͍~8F&{C P5%Wk9Nu?f,LpH%adUqQ8!('56x* &Yo~7:oa*6;=dD2LD$\0uz3#U)`CUH~_v&Si V8LysZPЪNV>ԾKz{ G&d94H4Y.c)KAc$Y6Շ>,W=5ӏ^oVz[ ܿ\_t)>\)#,b0Q*O@)b2 8;CE؏E@Y~00(ȵ!ϤdP꘏˜9r2!5B!jP*I 8Rѿ&X{YJ?ɇTn,8c{a&/BGj$4dwJɵW'O^0Mf)1<hr4Y"hIZ{lb\X3X-m4df`M6 ':S @(%1AO ő-NjE4 I4g*En1TsfoXhGmMүQehFgwuC&MnϮOpV}$kKuqTIOӒWn5zHje=ƣ2D1DQj ^&excƿ݌+b:zRqh6Sa*#S0EB Y&Jɖ8Vple}RBgHR_XY~ߍAn=<0ڷnq<rY ۛ6%sqN>S{~쿿ViFoߍ/a;qtt=`K?܍BwR9l>?}9$z7u}M\pp|lV j:MMY\>nzK ;]wϽhffY<t|^laWa^UA~}]SGVOGx3t?HzB<**0I1BZp˽C0Ly]Y9U]pV')#a{ e9Z|)%4M2#=QۏZ<0l/p =Vh@?ZL>yw5%R=**m]h_4]um9`e+GemnFWFzQ?rjorj\ߧI$ qi`JWzՓ.ñD:!A,!}%.Dz7q3j}P}#ٿS_i_PѺO?~Gj6EҒ޵5m$뿂SRǐ>=rͦ6'u6*\-)RKR%/DZ, И1B!0H k5n09WJ"oY[rg5zgSV{F#gX懼UsI%CNuy @ia$RmܣJw]]Mz" en=Ld`ƸX&vmvSjLmqLۜc)c̒Zgʧ𜰚6W;ǵ4F9|iX[wMeY6]vd /WT ]dxvooWNKZxg9}fӦhɉ6$ C-䙧31$Ҩ,0hz^@>82ƬNnM7n2{5@:ɉk >,7թ]53U#+m̮P* O_?6K<7PJ KMpқMGB{Ƚґ{ĥ#KǡJk5*tpLPr]^::bI,AOw m4;{{2x)g dEBy*V)xp 2\ G  w-B}4͸*'ӚoP[qCFrnimбQjkzi$JmJ{ErPí 18&p A1MXb.1%ЩI*0vhnBrdxH23%8l :tT FyUZ4cS,:,NU9+v_?^> OO Ĉmgd >;"(詀.Ն*2 d"6Cl`;8˰U943A'ĄVDݢ5qGl7s_P5M˨m:<؇ Uht$@0^#̡ %-I[ N<Sx;7:4AhϣF x"qDCS cP-akDC.rO %ڵK7$Q NMRKg\pDTQQ#x4[Mѓq^Mߖ8#bUcx|riɦ[Ebf87DzP bb@ p4aHAe y":\.M-Ex(6@#EQuj57MpgFށMQvv2Ƌz+;!_.QN\Jp|vuىSs;vL.z*w8]; ^5M~ݐFCmxl6xzay6,$5xq0T64hw\ۯk.ȯNo0Ѣaj4o~4j>"!xnϖ^f^GEjџO2Q~@pV`*P ^ ,%%•6B}@pp0pUP*K޻R-= BWY\.R;\e)wp_q1!2pR<3\'b WIkZRf˽+\65̦~<괸.Qz'5+̽Z^+Ox>n^Ȟ=‹@#Ż߼*FÚ^8};$GE1TS.51B|{憩jrޯT^ ;$q~Raqb>AG0} 
p<9)a3v9(K)PzQNFtsqv\ Q,ťhP~ç@>̠58}l#Լ ,A:!ҔGaރ]P .#ԝw").oVCvV6u.ťZJL{2ye1J J.r3zN;axmr(K}\9{تṯ<͏wÏ'bNUBGG\=m;"5W6C=\ l]lj]F(}p6n~sƇ[;#V7PT)?b)Τ۶B9asz7NOޔEqt׺[o|>9+ތ,X)lÔ FES™o>-^Vwmb*& #Nd{yWy/_^,?'?bH:fAiA%b)#RtB≐)CV-*gTXj#F.R3HR:[4ITZq ^g}v<'iIck_||tҋc_zy\EY,RqMŋ`Zo\dZnUEgvɕ|K-2$7 $'1(ԃ6Wx< {#.x9]J߿gβ=ufWc֒'/40-<-@`b ZDhQaz< ֵJPq2v}~3ָ΅ZAE>0VpPqkCC&x' NEsmO4bH$đq>oGP53юHq7hgH&z%·`N;!d:*1P4Y$Q %Xs,jq^&zY^k9FaDݞ_BD*ސ8b$M$]VMˎ/'Ú?6.^>S<*Cq迨V%*LuWchf&' y iT f9|Tnf6tJVy@@ęV]hFfWtWPuf grCOLqRo @K=IGo<5)! 'z2<̝Ps?I4A_Le^VׯxXOfd ?W.^ E͢DU+c a4__ZqN?qp7esl?(@HzB!.+y3wܿoT}Ɖ8tt6?ڇ5VKEuF5ͫlO+(xS /s1# P%4ה/W0m:oQ ^_ս9c7QU67⿮(,ee6 uҺu~g#˜ߖFq?[qkX4jĖcW&Cgfyq{D{FwZp-Zt8+!$"-QΣǖTmLRJΈZr:y{(hܔ|H peaRm"pr:##3^GrR=]G)K6l껔c56jeŴ~mR{7Wy輧`A'xq"uPˌ,BJ9K9ט#u~9޵5q+)U~HsN\xUT@H̐.$e ({ؔf@LwtWz(cMWU&8]@@e)v)J6 Q@!iO6\pBW$iǴ3R ^f/{$.frrm%)h .wJ`GKF]x&4B>LN(\4g):%,?D%53h`QtrUNRGR@dHÅgD8AR9B$"Jc^{XΉr ,zճnЮHƣ&م7L\a&|ftr.Y@up?$w} eKn5[B\J9G >&^uh.BNHƦr>N{ YAjmkiq3۽x WP9$ΫOm"@%MJ8*4Sױ.7RFFZ~V)^]PB5ٮ[s%k8>5mi=řjVyB 4̱@i)Je| *[ u%ArFVt*Վ6.a#y,ZgDSZZ-*}IC(ɃsCdH@=cx~!BW)][fu2R2"[7uS1I\q]\\SBuVJ B@5=PLEADU'=NCNݕ*~+wJYEGv/PIW,-HLD!̥2|ZlۮbU Q3y?2lY^nWTR)boFGApcz>Z(ӈ$r" TX@F)Ժ`b0VklJmj,%][RoN/,z;]v\(gC=Y,m OaPID[3*A-NTRg`ccf9J(VyIE̺.jn\!mܖT\1bgOΊ=ѻVl >xυrTg3a"X2$PV:.ࠂߌ$OvoLƨ[,΄y$F)F*^PI *)O%޸v2 *댩 Ѹ+!QƳY4wWU_Q=t}E`N; \+Bk\ePrC,*9i*=w!F@G& ˲ݷ?\_֎VWsQo6p}bi +uq촣GҎsPII1:9-UYME U^a* \K9=uJ2F]"b^$"@/9͙QDA{IΊB;,΁ $FU.jWOnoh/mNu\dw &Hv@EhI%EkP+o=;CAvntM"_x&HC7:0&zdvF2SهJEɤ0 "#>!#﬉η<fûUnl6\u ExjԂSx&J ^XNi#V(S)5T_ s$r4"BtSx)kCo y!EimNqaUJ:Zڐ(db|mBUNt0AuSD 20ɕW]9G/9< F$bA;h:TKi F{G%WοqN /R[D&5 2".8oy_w)\KR;GD,!)CJ)$PS HR)]'օkW\J/sCqeq29c 99OqE9j>pa{ٜ~uاQ} _yOMѫY#o6G?Lz<\.;:ntFy/{;uta] w/sϕ;,mܟżJ`ļ?nx3.(T ZCe J$}Ę qZyewq.Ӳk*hڴc۳Q ˕fO_Bq)ʞ9jMu&ۮ003현֋zcEW|ksl46{LRwu61IU~0ZT[3@m h-ap;m{5k.v7t$ZHG0R)'TϞyE4wD` n97 P@ 1J:g~=ٞo/PF(xv'kYiF)mMP+,uF&`@0(kIQ;Y`2^@H',&DbbqKnkzQI]k|w )=V=:@2-_$anR!"ez)PYLVxs˳!{1o[ym/|F߇ejR&\] AdxzTBj,C#Q4?=Yt?K ѡ-м%7FPeZ)"g:ްZy?M+OJ-'wRX9a2v;lu6"O띬v.n=lk?,EQL0é J gÄu)\cJRT`1"Q|(2*S{bE)C艀R'uHֳb0) c@;Jp A42#g;2Uk⌻b)G,c gY3^lnֻɋgЦyGl'1JB1\":DFMhQEsYۛ O!{.0½geJE٠vĄ9%D{#v1r#Š/];ڶ0j; vyL> !hQ-I[ ^ZFLD%A*I*vG4T m'RAp]D?"FG?է0-Qq5'BBc*MPt#03Q]FlWž^9JEhcYj> R=aI^ \1 ta7Vu̺kn]ou9ȍ5iY3ueEhIќ9%"An]5 湸]ꅵy-ֱ]v^q\/WZnxy1Z*0xo8xdj8[gbk.^ts۪x/ x&GyD=od-6$x6oQ8qg]3ӐgIlr@M5M^<مq7g9ApΉb~[s_Qgx/r58~MQ}}?*'9EM")Of%rNksY1=o-WPW %"j{7Ihm4 rX/⽥ ذH |~us&sysX;AXLnwgѬ>>6?Y+m#I K#23B@cmyJ)RMW7xH%%V*2"6Ksem?d0=}镴{Wb᧝yl[?{4D0ԜO)j~e-`5ER徰z)'[zL:vH_9lVD(l :PYKN$dnEtI[}zZ\Mۥ'z6o/l* S<͈Hgy4SnG%{ٶmi4:(oj=^?<:ճ=ڛw~擊޾'|ߏ^fUXzY;Mzi.Yzńڞ{tGCV鲞?jk}E19Ydn@ZxA))*s/>}KKNVYID4֠j$V)u!ID-CZnS񊘐Z1,C&Rn$O!Bh=xKF.]]vEC桷zyBOlsuqI u3 ǫV")`w8J.CTji#)L'Rx@lJ*H*y@ R*}(RݾJ%^]ꊔzqT]UpU%ܡ+wuUTWWoF]treZq\9+7xtz2xkt'<}qQ0$_4ggHiJ'2Jr`ӟ[Qc6LfjqԚQiLM#4jK8}:zY>Q o˨QۄʒG_y~Gyk.t:UO痗ymkZBt@jz`vՖ\:]RjK%*E+&8uUUnUm~@uJQW\E]Uj]]U*-p߫WRWp-Ni]vՖK~2l[›mƧ5{tzms?&QF"sMφ)C,xfp'o>'V4 Z˒w:߹xH#r1Ss2IJ*a &[􁔈*{bj{bSy^K3dHP[V~gǗ㧿rg2Pb:gכ/e[GCyw<09}kCn 3,Xg㍟6_&:ŀ24(,6nVR(?e2{NCR'ýz}ٳB1DRQ`.ؘ6@9l5L# M!EbA6.d\@EsZ՞ZY/"HVŃQu\(9NANoR,l7,U=5b#Z*]:ǦӟkH!XnhSQ PBIHSM IS 0 o';/{ i~[fɮrOg :*&/] wW#+a67>1Lxr|y+ޔF]t h c%ߖ%5rL鐉 (]""8Q#E:z*Jm'U[)  SQ :Y #+`RE'G-RUT$lScgKAgIJ! "DR ITS $M Tfx*Hm7' %RDT 6O;ҎIrFls 79r<tfc71N,Hk7Hi}P Cj:{0 Z4͡J @Xe^0\Vs_^ q!!3^6!vz;'npD30ȘYSB=:06cQINP'uԦ o8NxVCoY6yyT*S".)F 0ĤجOdpQʈ>EYA Ne޶oQ6E@rrnA ;bjYeK%1 {VSi0LOGs6o./;~y5E7]|,_[ōR٭&"%KO?JO{<4@jv:G~1_{lz4J|qqj`0rE -Nkxc `V+4s(> >'z_N` '/eo0IUM/}=}5}gܾmYp-Io;۽ I7a!񃃷d pDzR*0,!2b+6Iyo1F+a^:8gWom-MJϟh BmOlґ (W Fik=Y$r^D.,luJ3 "x Yw6wlϗd߭yя5:5H:U ! 
ڧm+p19[nocصm{G$ʯ2Q"_M/b^X޻9hG9:Jq o5 ?;@5nPxxzBl?!lG0oWt!k]dI240ޠ"47!f"A2(ş eNjnXC7k @6tՖn#6Of >z#6辖yeԬ9ә癭Џr 6S]L>&]gQλ/@ 7}R˖11^P> P11X*cR`  @fMQ` )1*eEQk}Smk}C)vh:q$Ԧκ B&%K23k58'CHթKE3Al?{Ƒ\(~0' .q$3~\S$>U %J2)JJk ؒgzzSU]I^ ,Cs&a#qcG<՞JYvж9o ~:̸;d<^ktoSJ@wz>kTURTw[zitWI$ul9.#Sl@W P XAUƀ T]S}`TMUw[d7#͝n7U=wKFgrhQmxi`hgS;~ۮGkB9L)c3(hxUiY5e=գ?1)VDtp\R6pbLs)<[, 8 U_ @ OJt0ɪL=2:pBFQN8e;ST<9;j\/y6kI@no$K8?C^npUX 1D*f Pӳ6|+Mh%7(7tj1#ZνLVupB|hŰ%,wi%L$bv"Ѩ#>hb\h@N,^Òd>G=#(Į'h{IV w>>^,0Q*OU0ARVdPփA=z`fFmx&R|$ϱ!ɬ DC`}lU+UcP w/los Awf;nX ݲyA"x&J^XNi#VXC0MfCAvg2[zzҐs:=u61.PGK3 >11&MT=N 0B?C >y $p"z*c ڹH7ԁXhGm]ڮ?ȂzUذgw?P; EmkvC;D$/⸁F7}__8kizGeP>kMcTͰN2c Ɠq*f_XNO0:ͧb:,&a2|S<%~amw>VMgn'%njcyL`7MC%~lկ?/U?AH. a<~OM4/Qdŋcp^up"Nx[@UHx=^ G 7ʆ?l_ >Vi !F %)`G'u̪:5ʹ(S4(nG5_'gZ&c}y0GlfnL+ ?+`,TcCiLR'8 'U37d2(< S (-#bu;_)S7Lʷ{&Qu]PԨ2S{f-}^m`:C ig@`(paV74Qumn M̼UE+Aª;Nf5ц%RR>qOZHG0R2T@g^1 @PS>V P_RKt-<p<\o4i&)2m͆P+,uFeFYKMY" !|Zb9} mTjZo7Ɏ!o7ΦU (?ymY.2"`~uyRG1V38-, 6XၣUn\0xs:><`eN lwϗ0rc?~0j/`P)!2@Z(Se>>WȹZ =G4p څ ((# *${ۡMӫQʛO$_L7+=ܻL# VK/*A;4w~^Zݢ,xmYXR{ZqX|,/NpyV4}KƅWsǚ9ܨ3/myt$pkJ)1vtм$ /M9mQBx]ɃK&( <4h6t>*:`$Q&(R].GzR0G@q :%b6>罥B$@SVgs.{g+jY?o㩷B!Ugx.ߏ!o %z*TBk\rcU^F519otjT9-:>a2p(FF5} F5tkN9__!J9.oz ~ko8y+ZE)g@*2vc0&$tȨ1f P)I꘭gѥhrNbwJ2T9#c{JkXg MXhz,ܻ߅bSfr<,F͓_7(h4k4|N(YH\@A$@!$iKdojn0Q(d"67< T\d{8CJ94Qg툱 0>Lx#vk܎~2k*桠vkq_Զ-Q`q|Qk JaD$т,A鍑%AˠA &CnTo$3AI:iZȹ+~x(%izDq3oIRPkаR5JpmCL0"|ZJo:6$)8Yiʣ,#I%E5O&Фi ,ms;"Vq.W.dlKe\d=.բŹI+֒ȠpQb2l H1TYތ4JPp!"z\.-/Papq7Vk`';]QT;8O䬭rޏ#>Zps*HtٚRaKPC Hy9`W}@^A9 AIgsP"{ãaI2^\ֱ@vs2v]Y~\ѵe3{KWxۓ/JZj+ߙ1CFo{Ubl]֭[L˳[y#!̭se6RSz^kfɖVwoO/lupXDA@)^b6K1fss -,01J&$D\>}qsKB\ G@I܍xǭc`3b2N7kVf8t|1 U2M]9u R %\̦|~1./ M) uS? ~65?+VkBwO{ˆ=;UQ:z5h S>j0nxFz!rҖ:}.% VxuR~6B~yP$0G(fZ Ė^z Q::=>hwن,hȴ  .:U)U@UYE`GuڧFgBV=v'jپM:ݨdv'^gS)+ RBr9 \!R:\!p IC\! \v}Wy8\!v=\=h%e;p*Bj*p*%U=\=G2T˥5; QL0N*xr1WQ{VTZUweS^^WGYHow1Օ: qf ^]^>VB_tT`r,Ӵh7T7ō}{'p:!5MxD ]S3\Y]cG*U.\!tZs{rM҄I!B \!VtZNԡRIEW撨 "O]n>UT(գpKn@~@xW tT9•5T.ٮ`S$vjz^\Heqt5aeIYIg3!Lokø%T>y\3tp >?{F!? ŗp33A@I[YIr;~ŖdˏdĖU.~bk7Ӈ%ʳAq5RЀwq>F0|(̋vTn9 .oQ_b/-,ЮO^bOTb]%>]&dp>no9>9Ldž7uV OoO6ܯGDQkAPR:YF$r 2qRs<3Y1.RU']Z/( w/AR:_}nuOnz A@$HkA(vN`HC $zkH5hv\v/ߝ_1L#]yk0pL_]Q6l6Js}ƫij` l@V%#-[,=2=݃~U iLt2.wMZDQwrpA;ޮ%eU~|ӛ<]b!ծL)2oA$㲉zmO%u_bҧ1 IsF9%t:1^YYٜz&aAFwKW5fR\Hhv>E D;2u Fpg,B0_oEec8ۍYB B!lV7,m)?W~ &-' $dȒ"C6ey,`>ZyAs$p)YMuKFf꾀\fWxio.%nJYOݯ:]N-4+gάՌ8 T4 Xp ÔFUe<ˆhe('f_{8d ,dPLD2fNq@lURkR)ie6XR }$$.^Fg29rw;Et2&Ύ E=R&.6ac?DfY[kNf\ߡ }")t> E^.[^»w=9}ҨuBnm6wi^k|!B.$\8^g'nu{uh{lKm62aL' n0/2|Γͺ]Le[:^\{˺vWoWn|7cZBto?Z\ںOd ->Ek.oޛG}Ѹo&u !sᓆR& T" LGȄO21ќΙ+2Zpk%;ѷ^7lKqъK~6ɅYy5r|ֲBa)բ|HѤq$sjW&BiK]O Gw=ųen<0^k&qswLM[yyώפG8;.xLqe!b͕U%KP5NP-ta[aQ=B؂n(6XA߼?9*9F|3O pthm&'CUPC UIB5X,E^*ܡ@ġ9D-!x-evCC򒡮zH <*~M69yREB*dV{)R2W>&yOV{|kscf˱&vǖMޑy0GؑJ`._V$OWPV>VKOkՄ.ن=6%P/>99+%Hzn5Q>tj -ZHyE Z͏0lv>W6 <^l"gQieH, "/uf x+2ڃzD"]\e,trE`m yJ8 $RE**A!!9 ϼ2,<:9.u]}%*/xn! <0#bgkc4&^,|q :3ObO>:G/ yrܻ$$83Q.f! !S*Du<$_s!pzbx> EBo+*&4Gh К๒ZFcY_m_nc^svWQ82M;ږ0lp}R /xftaø) =FvGe2!#F!{QTA%m}pHxWţ:[\LcB<]J+wM'4onx*=[~Q6׽#mֶ2+Z\z\\B;"64?õmlIf7וj-}Sg./?0N}29$ҳ0bvJqr" $A9^*|X103'{aiꮻo{r-d4?{ӕM!}lgVs:htE;&Dz6jLTLcJUw֝:k E;)N}wȏ}\'KU>`.9@陊іD'@[)Bw|lC}yJ*| x8'R,];p[Q (αē,(T=UY=:MnWӍ+[lǭ!o7NHo< o]^xX7yO#(2[;c :ք^$5Q;zF$xj (t ]$+.:B0 {#@sM4CD_*pw%Fx4>G C+1!<ԮهT)곽^}e)eUrȮM:]T*[dpC+5$M@:aoZ*&O\*i-pT 0'j5qOt<-7%Z׋ZMOkRךWkUkU(uUT55 @XmKe@3iN9kA&(H۽L̏hm LUdsYrAГ'41 O ޤHKzL UqjXXM32 Is1?/̸csqqǪEO3f݃hha4|4PbV)r+BD9O(3T(74Җ/dUElieb( AHQ`SR0rd1J*zڃ8;Lq<ԮEmW]=ݤH03 Q9`qcAH̄,g4 IR p \p&ĀN,"Y g;"q>.VㇴYMK2.{\ܸ%UISjMZI,R6mg2 % AŃﬥCp?<m Zzkgq E{?>ddv1 g. 
m$|&SLR4 4(Wp1tw*f/t)cKgϷNvyձKk9'u%=2$E sQ3eDlut*p:c_ӳ/&v]7jS-7/=z2Jee@2!2o]x@ 'X")i/ڟɴ&_@~:4%Ot20]&4=X7ϧaj߾{fڰҮiNL3Io% :y.WĹ`oƋz%17ZJBo5V@_H]m.^%`mK ] _ ˟V+8]\yiha.⾎?,j?]ΏW~lmٻ6$Wr1C .~~uH+W=3$G)QPc$EtWwWU?UUUz2ʷdv]"@po,< 1% u<݀^d^gy BYw7? X~!rFb `hn)1HWw>_-Zjxg/=Ț=**4Z.\˜VFpQL0'JKGQܠ'+("dy`B9/35m|깝SO|'![ 'ёƴV+nPi 5<2b tۈVk]}h.yE' .uAw3kmyw U={D%y}49_ܛp5,q,rY6 fKh(2"; 4}Kӌ(N4oDPǤ HEɌJ;쬒T^FƢfIbj6gWD q% ? ,^@>s i\ϾTܷI<S=C6(} DD4>x4Р1W #"͖`rh5Z>@QAsgX6`#*$ 5 Tѡ#q ^^OFYu>m2ަ(Pn+:,KBhl.IVFk^kG%D03Unt/0.Q4CZYD}MxDVojvn<ðǣ.hfJ+] ^ 0dAE,@*D$ҁ#$ҁ5 (O'%ೀ ARh0C,<; ^#tPcli-EvjF%`<|y1p~zrls,TxGDJNcRɜRafXTqU  AyGwGAS`(`ܪ—քu C{ ,a"PhDE1qcau\[V5vΪ0(\qkO^(|gU.J ãwԬ a=:leG.,j, QGcGBqNQAلògMFg'w4qRYnw= oSO/?g6N7+Z6;F%o|GBGC0Tv57,.z}Ͼ*j\zSj$lX2LVL,>/S^> eE1-Y{>. {NJW@"jh^_|!\ *lGq<0 boKuCIMƅ7}٢ys|xkz?nI4K3fn㋢~> nGhu !^74U^aKU3 ̽vC&IozS3itU/ t^}7[  Ab քdU0h7 0 MSک&ҦJl2Ei0\޺_EYHf5;7 #$ Fj*e;0A)ıH1WOpS"gLy5 .푞_w gH P$v1b@ IGi n܇ 8 e fX*`GH9VR;ŚKCRHcH aoR#å Ѹ䓲`"URR`Fc&)X{,܀9Y~˘T6b0)m{i&@2؄0NƎH&, { Y;𢭜k Թ_'Ϝ8sxٛWhٮK޾~ &-K^l;BHK ."7x8lɬe!X(@,dsk,QVtXN|-v)M5,*r3E"Q)% ɴLM|L9DŽRn&x92J#"(KgA[R.k%OR@6MNmWZ蘒8O3dzKo.}Oa\If&A>LoQ IIR~jzE])VMF]%r5ijǮ-SW8J+B5*KZUCW'D0'QW\ۢDeW$ՕL1:E DF]%rYk|WZ~BvՕUW@5* eRWZq*QYN]Қ%&hHxVw=Mz1ߋ91gK)4׆_H@#5^6:YiL)Q-bgub\1QرcD<]-^`kӈ<Vk_CbCoE]GVVG9L0g o>\͑:g0"֮ ?z7~ƒ-lN**-ŠۼOAE_8A/MmV*g盕Y@>Cu^ۼ鮋@".F20kBf(w hnoSa߲f"U ZW*e((d<0{07(W5k+A(vFehaj7oh'8!x˿P8yIQZL4toً]~kSBioi(JV nR Kf'`ib!fs'Q|d~67GQ2D困̾MM1uN(b~Ǜ^ɮm:)nC>Ʌ\lDCrAR J.o+Ozg6"Yuc^?m{O#te/˹.y+kcE ";!Uʉ󎃱RHfKl\bw6 l`bJ u}'QKѫD%)+JøE*"uȕ-*Q^]N]bD&cF]%r9iJԾTQYs-wtH*"u7H uUP +A)E*,dkU"WD>ztTR:uuJSux;t?F;ֆ7gY.tzV,?wꑷq0ݙ_N#{,ůiޥVJ]ҕZ}M3V 736԰7A0ɵ r E3f4j|sWL#,8ZI}Wþ9/RaHMPHNR)֓LT5qv5Vyhl5mEا$n#+XG POI\;h J7/&8|s%798 -ldskߒ) ;Tf@z[4_Pr|z6 `cs֎8èGI畜 J&N#G "'=h D90[EĖPƽuH ÌМ paR/At5=@xvM(.|HzfW*6; ̏Piϯo.A4a\ !9ƹ^ʄ7"4 Xa|ꎍ7~9Sh+Fk>q }nog[]S:UҢq:5'7Z^F ?_5ZAI)skMJS.(hGZʹ' L<Xn4ңW.*I ÁhX^z!Hq#en_ ]sx]{ŵ}0! Q$VJo+dt $w),9pf(c*gD40"I.iEt&A) L"&N%4P9fV1'XtGjQ;-#gޕ;]1i0}-㑿[]7hguj}^W-zdv_␕]DK)*mٍna{YH_*YWnw7&Ǿ,pN /R[D&5 2E7\zq2,} dgqܕqw3\\fIts1Q-öeh|.cN\La:>Ǹ8e3@ދ>K8mn4kN޺/׵#‰P;mMTPs>rŰIosJݿ#=$\ bϭJSŹ8I%̭Y K˩^IC8w~j|q|an`׀.Mwc:?M|Z2^0n wA@:Wjc@H$EMIҚē]dbPhQ~99 !ObݢUUv48?JcE|;rUï#[)x>ږ—s*yIJkA%yp9֮;/$8@կ~V~C hGf;(5DV ܸUw,biM*ȣҒ?])p}ʨ]]%Xg_ftdf5w]NTq9 -{24?[ -)x)x7=M=MbVO1Z6mʳ(?0ޘksZH?VAcU?2q0~wmdhu`}4V?2 +Sw3$x E+wޱ٤6kG6!}j$PѤZޗKvtm~»1 M&iȦ]lyuz`Žy[gñ(IE͑E(//B6]P?<~h2Z:0CHc,6Jjmj (Da&@enb9{ ukukzۭ|w^ GUqR C)'e' OM-Ftw)nܚ]1~7@ 2.ךD%v(X)SGTWHG]!+L(PT_H`~sR%s5!(]J&f{KN@]rgW3UX-FΞ [!|C嵽N6cXx+x/l9/ Th R&;ԣ(T`p%<%FArkgIB:{\rcU^51PXP[P|A=G4~_0]ZORX9Nd6Aj;ls"O.ڝ,?vs5kCo\Rx&%BZaq 蘒b$rcH.B 8e=ryB,LJPR)\B*ٍtbXX3BS ̀IsXf͸lwq6 ,Mw j7OߏGL'Fl'1JB1Pí-QQS1xTќD&W~A(bsÓaƞ pg6Ҁj"f6脶161ac[LP{0b#g7b$lǂŸPԶQ=h8>*5I A{0*XhAHhݢH4x`"/ RQXǀ39sJ7o,Zյ3뮫evG׫UxWDn%ݗg.+B3Eߘ9 eo{IuܺZs_ѼUaanvmxo|ոDu-|:+߃ &wt\|ەZ1xb7oݞ5ׯnEZnmTgƮP,Yg]3?#L!@WnU+ _x5qx5y{7L8wi,~tP!g,a* o0jy\jx>JI6i墥LT>~+gqK'93}\B{(^Ȧ"➏>w^vNs _W뺂x͕X]|RCu~C]?`=QU#,IӁ >zˮ&lr%+bBn.Z;yç^hnyqWz #z+u#Aa9XKB8p8Ytp8t8ҽ=x)NBJh9x'ayyӛ$?R7R c:Z0ڠ Ю>V)ߟIvmbD-Uu8-wo}Sϓ7IϞp7~}j̫\xޏ o_Z a+vcݱ^K`>ՏR]w^/J$G̮ QsjE )ZU!F%6ayЃ5f7(Q`! y+O^*T9u2+pSF%3CA'C"9v'塻j];+^5~= {tQ[&^(PĘpoAPj JPKLd wޕ%^ަJIBl tsMfܡ7F0G:85b%uwnt~Y! y0P|m $  gNS4:5 Cu>]T p h8uI $F`<(%jLKh#>C.VcfڨB .yyRJQxx"$%R wVg*ku g Z>R 0Z3T w(T H -wY_#yuﳕ!6r[l6E,oW X ZvϺ {RIVG&H-44._h\Ǵ9n+ #d)+윩xpXZb 0x %0АA[GEB4 uhJ5գ4Q=n(_jY$LpfϹ*D&ΣFS$.Q= K4="}HLmi"mR~Ė4|k0ŞiaE˽-Z4H- I @24V'(@)m . 
UBb|&pc^rlWVi-y t҆ 'B5P"CA,Z[wGA}e(dq.xkºxNj<#T$gYQ8kqA0#OnTK^c&JWCjU6Kͦ+$?][or+" #R 9X =U$EӲ.Lc"9ꮪ?M\\[cMjlWnsvk]] +\b?׵-+_ )uyƁsvxZԫOū{CDv;D780]OOrDyIכ_8sx]Ka%ZBž#HO݄RsVkIX,ʇ8reIc_oE @QUeHB cr7\mױ_#/kQy_/BڷoRX/rX^?ݼz`Y᪵ޜV4xD ={vkb0w9tMKtKupI"1>W巏ft9:IԜ:8{RJWIh^mxU>hlIh 3yguh֞eORJM9GWC#W޲@ Gbʹ Cj4j]cT#VЍr-gcF^A4H*z4J>h/mhϢDR%We4iQĠuAA+L8hj-J&\-t}{r e>ap|a:O0yi9OՍޭ=脁 .Wu2 >ֲ=08n0#:a L)*J:jBmEuA`TH>cQUH)SR=l!$*R$S& a̮K^;`ބGkkL+UĨʓgwh]U"#"YqlOVn ` c#~DcchЃXs-2J *@D@i1 ֈ*,LsËvrݡmmw;&w@@"1Iڻr}d#ha:vvd|׎cSΊ$񝂴|LJoהxJhnʆ|+R=6_5 y{44- bCEpվ"8 V0Zc/(=NEp)}6pED媃y0; dU'*68YOKi/ݿXvLI6z!OVJUЩq?mS=6w *>PuAKޮJriɻu"8YjYjW}n/:c]vaŇ.Z0MqZI [tx\$܁s.l a^4RdII !:zSZwkӏQⱪK4 -A?.WgixBgP@*ѣ Y ūY9T 3;uP TqX]FTg*garyCa9vE*kΠѨfcEAD5LLEPJ; >2bACV d4/z;Þ2ߩ}XIHt`^|:ﶶRoJ $wp^:z>8SK( jJgU.TfR@K58nt\#Q("fB!jRX,HO .$9LZ)$I;S_=5ŗ-kbu=ld/WRD.Cpn D+k,$Ѧ,g-I $ajFZ]/ɼ]zu5wdxB#GmuͲ4Tw[&u +Ǜ##7 Byt3lTeJ:;=iq6'K-'XiwSivxި0Qͺ>@[&۸kYz˷ 8Zb?\Z>F 6Hg V**$Yxk4!::Xlχ㻙qq=1Q^*KLaLF4̀k&RA ![TfUcTfN?Lqऱ_lqEvbn}t-z^RDSD3aN>z?<^]JVB1{8jeP_o޿;E>^WҖ>[/>oD<?Ru-<^7wA_)LJjղ֌325?]^ҫF֋z239ii?/`P+[XհD/Na~(adʄLIw%r!"6CW WV*;vbLtut^4DWx;zptpbNWɉ1`!"FB3tpABW֪tn#+0 ]1`-+ bNWꉮ/1GW f:aZ+FQLtutei]`e1ne<ȵ=]1J]!]ћִttR4CW W5]1Z1(ʹ3xtŅ{lKvvE!|+th;]1J= -iWHljso <28t;=%HcN3pJp<ъ~ݽA'=ڭ=l|ޠż1,$Vyjݻ}ӓ< %-x] Bs=$oNcjo슄AEʨ@EWOmL#ԯtX'Iu8ISwQ7bK' jon32HC Z.SPA0O">V`7tL?~о~hqP-(l6ad;9iZ+ 4CW |+th;]1ʭ@CtE]1\Z+F+BDWHWóE ]Zzt(Ltut vE ]Z%`t(:F2l0"8tZ~hAFiDWGHWĖAO.43h;]1J;iWHWzГ֦W;õb^%Nc+]KقbEZ;zt]0tnCĉ `ph0;=Q:5*RbwZ];^C8rR$mp 5B[{u2zjXf?w=b*6M0`+&mJ7vmQ,kܤM6?(]`+D3tpe3th]!]i/[tŀ0v|5ʱtb#+LKeڡ+]+th@v#+#A*]1^EW 5c 2ZTBOtuteu%rRW; ]1ځD駍c+8mNrh1('QҕWJ_\'`ߎ1p%S{t(DWHWۦ+n S|:]1J|W?]^^x HGq{Yz5ۡ~CKVaXʭ`iك{v>j6Ic[)P^0BC V7ȸMN-^F4ccɑچll۱].GoJr`4DW x@~pn-c;9ՋЕ閴+Ѯo3h;]1J]!]8Qkh{50vbVMtute@{Vb=B3 5]1ʑ^fNRVUj-.uzpyf-?̽$sEM{E_^Œ|?VwC(Uۤs54d9"k# Rz!>o{jUw+ͅj \aٴ%Zc-.C:rOZޯ{k50MW5Y7~zvzm.^K=͔ņg'śZ޿͛ׄqǪcK^ǒ6&##L+ʇ;{ngSpqs|d0o1Wm<'i1|%_AX.O.e NDG[ӈ%(է2.WYEi#"IKT냴I ·85$E&0k黅;[~rUpW4PWy.oB@YmF(FGhFa4HJjkzѡ!EIB"dՔq!ƬR֪j0KNdҾ?ΖFijO4m\"@1W(1;GۚBQ( CԽhʃ=$ZWJ($TyᣗXJQ1' E*jFFWY&Z-u׈ r9TRƤ!pU5 K#iR8@S$"DE9IIC.;c54iRV4YMH}cRHHHDXS AX$̀\4YfG1*!dQ-цN;N*4(MҤ2IPA+ 1K1hUF\w5aul&j5I+Ke^Hthi㸒e@ =>戵.r8SB(K7A!1DӥyA" ^ˬF+ |bh%3I|SskKV1\JG:SAd*X C4*ܽO/ͽ4e7-GpNTG+Y7̶6E5ɍK6J+ |jԱS\kn`ts=i:wK9jkmߠg1X6cBFUykLHnIʗ VCK 6Y З I buRƗ ZRz^EbHuQCBNFG6Tق1K(m/XXtdàtGj]Â:K+zo'0L'mT E*UFG>rI zSNcG3PQCm>+h-a}PR6,ZJ2Tj%~YZ,X]A8&_>X;ɺ(VCwdus0^=V+ArMiٚ膭R?kAE F@n7((ʡ7\AydS>\baIp*y Ӥl(`_a:\G|'$X(F EEt(M*ӫl1n2^FhU%ٕX;F͐jPo]?;1l ao1K@ JAU.1a |k)& AŚE kG!!.A D5t)ԚI0Ej>dDGy(e;@gih!\= :7XaS1)E2GSF hё%@ U?о`4_ ()` ܕ/h!j\ 13ct0Wz,Io $$Ɨ eFkuDt(X(k! e 5P #o m${µ:~}AB>3'ʠ#ڧQ,KTA Bc̠JUYa9i #NHXeǀ9Q;/4¶^[7.y'U0o+?8;w FfJc{T^*KP˶ d]m jmlAhLkuLΣ'`]EBԅ{[ɐ4Ra2U,bw w,Ѳ<=%f0A 9`ɐQgk#ԭ&J<|+&28Œb5|1 j#`QE+5h BؚbP>u)}`E[LnQ2ÌыE(yh$›2Бx=vE1(EIbeҪ*(- 8H\ jPk {;z3#`uVH 355KA2ZuMռCb8;k\#YA pf5ڀJ5񭷦:x$`^)40\p_h5w jƐk9nm/̃;Ow+X:_mwi.ozv&qθY0 36{ 6YS@ٸ2}l@(jqhu\\ k1i3j9-kn bL\\ep idE yֵؓJC7v#a?V&:E%\aۊNv0 )aTХHπOt4<<X׷lc3옍}u`V&5sJmhOF4ߢ#^Q Ӫ #w(a[T1@Z*|TiwuK[`pO*6<&գ-М6nmVܢhR;kѮUVm(|Ϥ@L@rZx%dAц?jo9hŨP k  (BZtàe f@F\ RčvXzLJKF$45`=mGCX1:a&ɀn [. ~5[.VKcbX`XKEw,*f-$H 8en V3V KkBw]~Fz+{G-F(GT1zܼ?[nnΆa`DF-a \%41,7ӟEv[-%~gf%|PnggOy|6AR<bxH+>^~ipI^~}y`By~swWڄsEh?XO{q3]l~u*]-W*{ws]8ݗJ.mh?^UG%3|Qđm4TUWȉ)WNAv~Pap;g; m:҃RBZgx6+s^;J4]ѷk,th^;]1JN z3Nzef+F{k4FBV+;]1`ru4 ]1ڰz3(# ] ]76Τ.Mc7k+F ҕΘ4]1+{Ӝ7)ót'+~G]1\=5at{Sԗ=Zb~upCmZʠd*LpG]f+F (3xtj&uŀRtp~]1NWY^8t G{VM26.]tί En߲?VזL={僋1VRs)tO|il)'M-8V Q82 !3:0ϱt\c&Ck&CI+{P&HN*CW 7YJ[aQ tute"D? 
]1\f+Fht(= ] ]~Ont(ݯ:p, ڸi4{W֯(ٻ:Ery&+4tp4fѦ]:I)3jbiuZ\S]br'+KӨ+F{'2]"]%DtŀnY h^=]1J+fCWSΕ2gإ#ap]6i0 *@WN곧޸l+xu¥hBsXzoI?kAf~u5/: hW}FvemI4hw4:9_v$?L_򎣕3)LRƳ1\7gcѬݳ1ųg3*ʨ~0>BWXC)tuteRODWwNWҐ iIMDWlԸs&vJ']}Wuzb~&AhZNL{W 4tp4{W6eBW'HWAhf+QW 7Y hԲwutN*NDW|'=,EW (> -9trg* !MDWjb4b.ڒBW+©s$ap0t 2*.ЕٹսOJs<@hmg3}RIy<:LjϘnNuuo~߼sBTIpP[tM{h\(%ZW\1dT5ݾIQǑ1ջ=]g/ s}޵$E]`h! G0vEH9߷$6%YbeF-[bw_Uw*8*//f>ϧP,Х]t!ݼѲmR4eM_[)f+uhT2>&`1bgwplwP=Ntk V='%9 ߖ4uryi (?n=*jSS <K52#GM(.d EQywMYjijkU'y5mHt>,R[ G~<,Z}u{(gxr^Ne3;%C,#e5\$"NX]=hoit64lFY>\n]3Q S\ (O&LI8F QFYJap-1KȳuaZO$HR&hlxTX*A IDingdB󥿔[q妖*/mL^bm$ &w|ʵXLAHnQGhۦM1S#Qz!"  7I,Qs)cp­Rii !',u!|RiBH"FyR; R 7qj g嘌D0 ~sh+al\X!a{W`J 9!eu2/d45=ȋegjk{4{?-K(d8FD$2.=p@JnbBݶۓ}42 -r41z_NA§>thynu]NǡyovE٠Mc< P¯4ۭPD=ig)34p]3p Č wn>?\x"lq*5aRR۫Ojep T(i wBM3s)4@.7 eq8ڎU3eP <̕V^?HR::lĜUMitV_|u=&l2ˆ^FcD2Y+냉K9&ZH0<@HL{gǭ13k\6埮I؏/c t$ZG%\1 :06zTD+&IA'D)x%T8 DKRDc%IY,-,xG Plzgx@wo J4W0ywT\ldAi`CM:grӳxDBsL{˴X#T )e!m@_1TEbg^ NŸG'@<pSnPiK;f:a|)r!&+L.|l8')!?++2+QooR2wx/V)\.`H\Z~Gae1]?L`y0Cj^fR}a^'KfqeڽQ=NGѯnX dxL09ӌ)!5s_JU-aPg+$}~`Қ矠J'ywij u0.|6M<,G~W4g@Ē3eJ91 2XCQ k7`U F@BF-q!@G3X6InҐ2`tq9/_񗣢iA.4uzXAA-.$8U?CK l k%FǸ0N]v:h `cgmI 0pgZr9uv^쒴 AR_!}?rf<;dMeM.J%҉F:ۣ|zd%R:"]1ۻ#:[w>IVQLBז*3M7l8Gqw| .TyD1;˳5wⰸ.}c|]'OFeq`;:Zyq`W%xD۪u6)^pOPltՆ;}S"ahHHAMڼ1w3ަ47j"csFAsêNaV`! G$)7Rbyy+/ 7H1b.2Ō9-Ij^REbB!|Sħ}qd;oROt=l5qJd\)E9ER{šil8LJhhH<@K|z,i{Ӎ=$&qߕמycW|(TO:~ q'o#f\u2"m rT$RINOI"uD[m~tyǑWȂrc$a qY)=caw錢4N8#r^ AHRVa S띱Vc&@=豉hj4BZw6F>3Pl_O?g՜9k<B8̤F{ΣrQ,NQ,F60p~;,Nw;O>p)J1,,VJ\φڛ8 5OKf@K,cR{ӓ}Ty$^j'2G]P,o +dYdѤɭ lUK],T#zk9J+*5*KQ+/\ti):((<m7 R&{EFd>ZR|RzB飶ěUvQ1g&H*aao/P;i{ ybaκZMpBfxd~a6G1&8m4eUq.j1tζ$i L919>Q6G8)2봎"טEG4Q$rD|C{y1AwfM[&\i4Nvz[Ts_Oqa]}V>ceJyxl< SK$"DQCrf9(P&=\}ަ:t-ov4kE]AzosKuuRWwv.`1Qw3)}f0Mf{E_0?ϋ>j2^DdG_1l./wZG`A0)w%.xG#܆ɢYnRxOaR ~p8Xw8>u8엂٣e.O}gFv%Liz'or{s#ɏ;'<0"ĸYd@o#Fl1@VGTou߽zvi6.OgX|~{zi0 uO7i4uubB#܋`k>ۦ~67ޏ> 7}g+Pw`~,W +z"aKT8-n6ZҞ*fUbRMW? R)+ٕeVҜ C%T7bJT2v/zӹG%"&Ju+N{:y-z\p%v2> Y_xƒҦIdsDEac%Nj?s?TNjѺYңU>j2Q#*lwmdέoaN|4Qږ$KWX݋?Ïf`86(+m G_!(BҊeg8}tj} li1SDrd$ 6jC2j  K5eɭe J:w@$ge62J̰a1qvN{*"| dG|zռ>ZzZ{̮٩NFpDh1b6Aa:^{dBaZY/̌EnEruNKMN}> lN#ȀѢʁ5]1q^^o`ռ%Fή>6͜b,WDΠqfXeG[ӽG:uVVTI)[9|e%n;<$,k/E,K5Ų3Nm0엖 M! j"2utkZ6(1ù zavcL i+B0EI h(xլR%Phhxb<gGDًV"q$"=Y6crnK4n2J3-Ӳ~`@;[DLPb*f Q" h2y.^AR ڊWt^aR<ٶܲM:at(+  [-*+~½v= E{Fir^n8bEsqMN5,UpUY8{p"4\4E_V4x]Ge}|mpCC{4{'-#|^SRsxp{1Oi{x*[!%ز94Ωĝd2۱Y¸gP;:nPxˀf:ivU$ š`)MgI陓P3Pg$M[k)سbh[~{%rn}G<t ,@xXd&э^ox" YU6^Wlt) I0> FE>`[`*G_NQ_ɸevJ`C2чLoeaǜt.1˸OG h6!brB5֢56NTY-SN/Zz7;zwi܁t7OQ4:Mϑ8R#@ 3"RhkV|#7*Sݭw}?v}5'pܱjȫmoiĮ,emiѰI.#uӀrȰJ2b51^zeU=Zc F d˘As0LH%-%a@$ 4f KR ѥȲRd Y=Gkm{C-sӵSeSjSќ=S[ i-%i8}sތJgǫVf&؛>'2}5o<}Rױ*MtҒƩ+tVLU>YWy JFr[A}Ӷ r %KzNֹcLtQEQ\Z\)KsaZXoKmkekvjzi: +$>D/nut"H 2:&ٙE\љEڕ-\wG>'?J,ϰ$gH 7$/+˜ fzvvogq"Vz*Y'8|H^] 5FlaI]LzIf z2m|DX)qZЍRe#})@82P9 ҍT8TL P6@e<|lQ>e>"f&!zb6HJ#>FJlLIO}߿{-%\k}ćqPT#T6U'RMCvTM؀p>j7T{iu͓.ۗb^e=^>zNM6n=kvtDu7sqWMi0nޡx?6⟈̫E%Q֒ gGgoL<%;U;r}) f]6w.`QL&k[fs]z>oGWX{4b{ qSL@D*pbxJTetMf$:\Ov!5B ژ]Jd4FJ0Gl. 
Z5+EZZDa@*H+퇫" \io;\)%pupJO!Ryw"T]"j}O4HCpEUWaWHkZO+vWW4UvǺ*" \޺*R>e:W$3pUU"mDz:CB%,["q9`VJpq}-~7X㙫r5/&w5q%pژv2SWJ^'F?_n:4<~W>SJ8\n,C-QG0e&nehNe)>),~Gip7J[ҜOi^ޛTt ^_Yyj#MKviJiqi8.usK9 :,]`'/,P` 5ǬoSۦm?oWr1ŃU4 Y(ͱS 8IVB$ .g2Xx8 +4J›d߰(-=9g{d N\aO^'GqImg3llN=Wȵ\BUWbWH{URcWgWBsCpEk*r*:pUT•,tH`~ԾW+$ \i5o;\)Wpu>pZmuU"l=\)pup4a+Xq* \iAs+m]:$Н"HZ2X43+c@t^6ݱVvW$dm|W$3pUąΐ"1WEJ9Z\p\qQtHZZO'\^0@8À+i*R^3nd+N?F79>Xq (2Ze҉ fVI)}u.t~s9wQܯp5:q}5NymK:&loM;jD\qkT".UV˶UEYη79_E0YiYd, \/ 9 M{D|{ZNsa@fzf~$O=niJq"WJ{"Q>:jӑ+(8H8iJq6J)劓 ɕyJq"WJ+vRě\\}uȕz3\)˕R ݸT` pL㮔Ƶ˕R>:dӑ+1FVp"d4r+wWJ)R)q3Ƞqw³ȕҦջ+PNdUzЧ c7_j_}"2\>ihN]FV`Z Wi/zG5kğ[ʥI/|0Zx{ɒvMWiѢvIiJO&]X3=7n(LݣK0]aiؔI=]vDj6{Jlk,5ҲY{ͦl5 lVod!ɕy G?`˕R: ʕ#'L$W ,vRDȕ76L(7:Ab J%N#W,rHO)]ȕ*x"w"Wec.WJI(W]}e }k+N:I֓MG+u/RbJl4qbowvR\kg+u_jWJ\\%+)؉*ٟvR\7ڕr\\)O\Z `\HVD J6J/z"SiJqg+ vRH\\@FDr`:p}2Zvk+\\Dr`o4rGBZ^RJ6:A+e}{|Uu_WAp7A}ooo.}Woэ(`-鷂o?;^/KDiXw]Fy+دeaR8RZYIJ6:EJ1Dr`o9-Bq"WJWRկF^Ѷj%K I)8]{]vwi՛S'~kL`.esa*k5ˀo 7㺉eZZ^FVvN]&&tI8HXhRdg+F.WJi&W'(WVbLa"pF7L#WJ+vreELr\\dDr,r.W&lrurJ}Fw+Zc.WJIv+3ɕKFK"WJkJ)y[:E )yi),rzwisW(WbP O#W{^Ѻջ+d*%4\X&+i֮@KV/W|Tnr+NNd ~QǜSou{}:o<{ϟ? r_!澹/ԿAӾ~cwFOq{EWXtCv{?uͷ\Q_}xR_77cp)ſ_plO/v_=nlr"r` ޽[|l>|܇ipGٷ)o5g;⻑ڽn6?^0>U^(^N|ZX E B#; e<\g@,j/蛏ϑי)?3@`/^5Ooz5ԏב܀ u}}_s`l},g2f笗)kSBI) TA2JqыK~ߛ=s>ޣ7}oUz}sU߿TI4BKenmQ\lcɢ~j>p}K+1ْ tgZ*Ѕs&7f1lm񈦅\/1v$:uh'РÅ>LC@1R|yB*ڜbr2V86{GZ1RR JD̝,RF3>ф`vy]۞&Sh5JȆE 8dl4311ИU6Qӈ5|5~k W[D!8  Ee~*&+Z:jNQ^dN5tXIc!^YU)^r}D :g:RI0f$fJD3>RCaMz7b'Sk9o3Z|1 V96&HXJ!D@BiNI 8{hΈEL㽰O4/VW6-f>dk Ĝ@#[d0ssvPK "r k`]Y24,REv4 %#By;Aa:59"_< G+Ur`sH <$,ǂb`' h.*u8:Ѓ<xvCd\TT =JAԙO z;{ڇ^,KL iŘ!QSt"B!MHg =t&8Wous]<ϯ_蝑Z ZŬ̏ލ# IP-$ >:g%>6#JfqK9!AjӪ2Va1 %$  .B` + =@(E&rZcd^[1P>DR,LrVtx %(1Gt_U(;'pj#VX܁m`cCEUda:\?5ybΛ6BdQi6v~7WWe;AdTNb f.fHcc.)6sP6D2 +"aGq5ژ nFMsFE0lzdž@A VH}M?g c%8 {޺jѳqM=g}jw! ;Z<,FݚZ$f癐FhQ?UF tPFj7LJȀ=|˩4dsC5ƛZPٗ9%* ZC .zx OX0e%3: _ v4 a%xi)dS:Q4 tqTܭ0?x  ۅ9m:ِCL|U].uE]v)&#f%I-bJpD$| ]XI:.", &u(?+4;70&10j]za"ϣ:^x6OlsbGk"Qa_(j uN߾=D#}D<UnL |nmׯXzbn>4N7wV&exY3MlÏm _ʬwhcѬ0^ zh^:F7+Y+{B4EqJwR:b}2e7EJL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&з}:r@97'.<twRљ@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL X}bVubo@ w b}L m"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b}L Ozf̓@ b2v7L ָga =27+b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1&zVx;TSlu^_6 /@S)y:ϻh.}".fK 5{C\iy@&-^0嚸1%_Hm!Xr,ޫ٤I-Ya>`}-L?o^MƳ2Lepވ ba!&Aֿ~@kl0lZuM(N8(ֶ̧{So<[xFW˙|:gW\WR52n>Ë5lP&@RdnKJM}A/!S[䜀cCsi2ȷp~'Uޤ(RBTOSMI>XP"1vc;I~nRr[1C&)fs{@`޸+W}qW(czB) w+8v̟vP\][B)7vq]}?Jb> nok ~H"w=+YB9 G%Ndj [_`1^!|yx2ËKs4C9O#!"&]PkSq2mR1)X.xTz y~UӺhvz.GŒDǽM>4HTYbʇeX2j&2wY捋s`ߟWh_!] 
%hֿ}_U'bwdۊ._/IoS/_;WqŏaC4]㳲!Ԋ &<ζJZ0-җ re}:E]li>5Z[=.4A^׺j RL7D͉伨ח/}=έ3 jtj<%\7\{[/t:\u?oWlP}K[t}_t~ߛ^vײz|՛++ajA3VنmGf0, v`kSCH6əqpU\ٖ'\=Sh3c(;xH^ x!msj:Wk}LDPt4b]y*pTc5@s.U5h<5pc˜=濏Nl^^dM'v<ӒNZ &z]-r#v6RZW- Oǵy.1Sk&Fhe]49/ w:Rp9:ʣi=>q:b:\H36D8tjW@'XVat:|r[ԅ|]=\cW璥{t,ZzMtA;XkaS6yUB7:E]v&5[qTRs2 X3TL>%Tga6TfcPwjk^=H^t@Y2DTk_u(.r%.918vQXCڙ87[ObHXC~|}\۾,?K zNn}⤿mcvq44ڂ9PwU/k6V.nn}>'=A;o#x0qwx9w 5z\V 5ݶz3A=C67RhΎ?\e^yi7 &YjIHƨREhQ,#3*H͵xOaHYH%c n| ƒ .A"/^u?b ?GRK~V"(ITүgx})ly wN{]ILcuAf6!Xs g*sG(Vr\>Jcp<)Eq<E69r O|TY) :8"f\;e dLѸo4hF(Kl)'MɧMOm$Aw\E͹-f}0ĎG[ff4tʻhC :-זѦ ʼn ]KJ9nx2;><ߚaZ}!7ױS,BWa+)Hfˆj5>zT<̍1:.ƅKHrwww6NIg>wǓ0GLnY)4uNz`f4i[z Cp/}a.õIp?'pv =\R?ƟpwN6|Ûy")c2KʤRACMF&0Tna&@n*v4 O'h[j|+[ng˥o> .RJXL 21}bFIRz󓉱{z^r/_~md:}0_OxYFlۖC[jH[b QyTZr%4Rϗ^q.N/b^s9kOʶQ60\1hc[ts۠ 5G}6l4{95 3 E\gcll~n&*}73Ij?f^o ۄ턷1 M&iȦ>`GNYΒc!Q“r)JѽvZHG0R)T 3L>/m97 P@ 1J""/B6]P@vjW!4(E"Vrmj (D" L&zTzE^@HuCZ7v|DV]w^ ƒU85uID&R2 G~wD [;ze 6R朶څaZ(s눶>#ҠJ&# Gi&U LPwv] XR0G AQct1*!jc ^P8-"R3X(OJg#gO3G{eyXx+x/l\ Z!:Lw(T`p%<%FO.yi~z?K?4sɍTy@aA-FfA9U=AX'y?ŵ%nnJ-'Z@'r3S%v=Ê<9mw`ˍlmOY{?rV"43DR|@XG%:CnwE52:'ԇG$N9z\ &%t T PI֌٬abg Ma]hp4,#5nf$^stجAv$$5J@uh!#Dsڛ C6\`{8j+ *w٠vĄ*Dgkbl~< K&hbֶܱ=j#}BO(mky`Q2!Iu荑% %h4D/ RQ}h7zACZF]""38 ΣhTG\a}X5QY1F,;jDIjľFrVg1(6)MyЂg|`DTQQͣ@K ͝d<.\-J,qΣ^l,%E^X/^<'=ZXiDRE I uMd MhTΨqԋЋ,CQXT؆CQAv |8 h;F?~G{~_¤@#YU@%kja؆ZDa1 sSD'RwDY7%I{IX&pE2>Xǀs ,q9Ghcs=%+rFHSϳq"݄1f+y.״f䴙|X}r6cӼ?Ng7;ڃ[#8mv]G糨Xkh3~BMM=vz^/[(w!5~{e3; ewD>IZ-4SW߸lzm>~e{GR*N3ϭ[]낙uKv4t8>5zk\"Լ祖Od<~͙xz߂ E'}:֍){Nٕ5?ot>mj<&7g]9 [_\ F{瓂hbCnx(}ߨ8 -6w7˰|ٷEP D`,PG~omP@@2*X9oZǦrQzOe}8 nlZc5Celk.IU6ruS?FW7Ӿ*C>8WFzu/MNNܷ< wtplPK૛5`i2"#DJ( սI9]y~?_I :6DZDkϙ%Lhe8B@d$;ge`y߁C8BA W^%Tr PdWJf3pND8ɇ8)i_.J.l8Eh)S#e)vG{=3|Sxuⅼ% k* 7ַ\hx /嫹yq%huQUȏߙ&f3W|m_n.–r Mॽ N ˓y+KүH])KueL-W2Bg4˙&qpuWXv<9}p9 r$ y0P|mbQeD%IL;|Sh)ZPO(.1I X: Q)^h)[ yPJp8L"&N%傩DU ᑚ@d޹blyeԸr{H^gԲ+y/[ 7qj7^vmHXޥ{/_7<1cH:l~iJ.WG-B3!(1e_TRΪLe.rLq6H}2Lj@(P)ΊYW߁x&ޫc(Sܫm.{]q0Qܲ_UYwڑ(audԂHS;|A%InO+övoʒsv5ݲ$ cQC0 uT4*JY爦TZ=JEzDy(Rkg2FG h XxeT!2i5.w-0%tbli-Efvܪl#1NZ~2jk~8 E[eiZ@ށe^iNRQFR0\]A) .%^ 5 gM.`ٞcEUUdZK@hô‰d`M(ѡ De- ce(d܋8Aƻx8< =a݆SvY.S༏Nke|Qo/,'{44"=TvͻH1R@ eH)C醂E#8kV<_H:j/.Y\nךS5͎`3{bZevA֥6p(=lx%X@QVb >e?hxk(Ve%.+aYY ,Fol35ÄdA2E&(vEݲR).T)T08 Y ;k-flȖ2}5svO\]E.9;nfN=sz 7m~}R ÷SL˃`O}8$?"1bvG#\5G=_uמ'+5^9[%Sbbc+-pUsVR\ghU:ϮgWN nD|>Jt|#mR|`SJ29/F?>`6Y,FgT3jjLr~"d\O~htmw8ɻ) :r&+8}m/.OHV*߱ajZ)A-Cm4Nj9bݿVs+ox.92٭R]M&d0pslnJsln{ ~fqbp6= R#bSGWd.0},pUu*VJ#+ɔ}DpU hઘZeXQ+WErX| VFDlS\%F~֪'%OHvFQjF]@@mk`4zZJ,?Й|u]G@>1EI#s`$b6 f=l:c+А9)*Sl 1r]j` *i E=Y9}{9)W'W4L'fɸ(uEa: t/?'?M~I8HE4tb8m8;ܜ@rsj=x^q.Ϳ3&q<_NtS ^ٳL+n回Oޖ-G&rŋx7ӏefb'Bg=zpvu[`YܞY)|YwFWl|Zns v˷O˖dv:*.G ] "7Kf6"4@Ҥ%˩WgM)+p_dvO"52DbuYn,W'W)z9|8Z[ }v8 ,<22?l,-^-8m袾 4}ڮXuM/vE+$t/*!4Y$A Z+Jxn;狍t?¸8.YjK-hKKw/VRmXJk\7wO]ֲ1/7.x(=O/ u ?]fvxEY6m)=mei>~No xn؂'b A/x|Lr6丮S^cF93+_m6 oͩ"YUfe7b(XPQj8-6xئZ_)7X }7Z>;5w6uΆv-_5w*hDm5jr⍐ݡ9P}/ԻL- N./ Je6xQnTthĺy ()5/ߞ&vI/b !q^̥A` XQQY)̈YYOm͍DɾQHZD6(Y-m>WXO.ڕ0o=Lm=_w-׏Vꪨ|`a%QC@@3iN9kA&({!"Ϗhm LUdshk, IX M.S97)풞s>HFjFz\Vbƒg\@, 32͹<خ&15?h/4~41~{oPbV)r+BD9O(3T(/rmi-/UElie %Q0upduYKe*FjF0Ǎ+]wڮ2j{{SDE!2טDcYQtKMh &/sU.]KEYE=.8%UISjM(1Y6lg EQȠCbg;k}*!쇇1rlQ,q2MvE\xOُYc8B,*鲳 Xp F{<gc'^ۑ4=mQr RD3zThǝ @E/Ii 4ǻ9`ޕ$W).%}z131ݍG0Q$EږYUHȢHeʌ8)20!i z4DTQ$rD> >-yc mAnzX ~$m1) ੢0|&kW=P>;3NFT:.~ U%`JH2ѝRQz8̡hiN · tvJPn2^x{_c*.iY0>bvL :kYrT֎n"r#iU%D]>4ɭپ^m.03[ΫZ[!Mc;M͓.nrsWn`yӳ۫| ÷<2tsEI\j^[XsK7?|$i>HFxo(V' q6vH#mIxzۇ!bZ+/A4ȅscѰX$-vjl註(=-pKKl^xkka9Ňq#{5*Cz Y@SJx->c]zni1D ,׸Un=-=y WϽWKgplPs૚5`i-NTT"CGT rhi 3 :;WHCsKFDOAsBC\(yA~Yw'sŠH,̉`eT gsQ Qmtu:q6HSARoRӚj3[Y⇗+U-aHP˄HcZQ^(4X GQ1fkZmDxk=]_SƦўfb<oN`O| s`~err$a#ȧՑ3pQ} ]T\_#?IHH%&prDi0*\Bq\ 
D-3sU3=xSSp dn`l0r[%.kk5ȳay3P¹"XR7LSfWʻ^V=SPfO_0٬J{ƓQo)`g/3*jJjr$@/I^?]%76yq0Z:^{ dBDZȩW.g\3PBGd\(׈^4 L>:&Mp࿹T8*AOed,>2IW6!X`T9P )V2""NA #(HTL'"ݰ9;Yr05_=8ΕƶkD5s3U>yU޽`JH >JJO%Rh|&ZaiAc  ,4"FJcEmv4*kHZleF͝QFGRPx#gY߁ո&Gܫ67rSnjJ\W("]SK >*6ri+PWI0_\h:o/k/۸䌼,ޔ윉xjMaMIV3.z82눠" qAcmuğK ,`Ba"¨7YjRʃp-Yp4FΎL^$jo.(60Ub:0YmWl&NV&F]3-%0:GT:U+#;}d;S+-#cF`w8A bxÁ&EdXnV*R")9IE$s2JaQiƝVI DI$3 ΂cPwǣA 낰1aȧ*yӪ,Aȸ%hdA3Z#-\Dt(fk$_<]/evvsVEEH籶.wVKhN[墤V| uJ]FYX\'S1Pv "*V!\wYqӗ@ #e ™{)&HanH&+B/Eol[u"%hHkV;'+r;w @,/aQB6ƒE xn!!KTvuՋi!QB dQ!$ID(ioJ%aI5D 0iN;Pʍ"VFudXDci-aREb?$mȦRύ=pΩhΞѩ8sCK  #fl(h_B!F>'>#fg@Ε-sɵ97Jc1F1sɭ`)$ŤEAmveZ)K&sO@Bs !-5>6uωَMGmmK2¸e~za`ީhi@iS5jڨedií D:͢Ö8t5# /h&eu^"6l^>+-h! f8|sɔVu_>~%^ʘG51S7ޜ3#i #cPU[mlg@ψeNs R9c |`S75p%d@Z$I¹UZ__[,8jgf {0Cnwuաbf;m?}tgayOFRxA-ʤjFX5 Jj~7+.eܘ=&l?^ee{ORH,)}5*kQWZ]]%*%m֚Wu_E]%j٣D%aՕ& uȥ䵨D-箮\QW|ǩ痈pyǮGQWϢR!r^?C]V]=idS(7BSy]?[Z(/#ᴃhG׷o`\]a4{09O.ED$_nv߭1qW5.=+L⑃$Y;Saugqb ;`*у av i WbxmU~ǩ$3݉\EO)i t2*?k`[,/3}[ZDZ>UPv؃ .|Ώd¿-V(ܯ[bغz?32UZ;MDTdW@Y7bU$?c, &K( D`I7*DPe|,oz,0{%sP wB>[lU\OgҾUӤDlgSZx}J!*œ*sWUZ/Z)wWZ+vOtU׉S*-=Rz;+m U$*.Kѳ*7#\-T\ں*;Uc+pʂڜ\UĮ^ \ ܱUR!\ʬ  lO" \Ui_k?XJ'zzp`9P; >Ju*pUUG|^Խ32/pBpK)N,҂:vRSoτ6Nأ|W4R](PLSSʿջ2|K~_Wcdoaܙu/ZS*M-O%E~8\M,î _%V6e -UkR;  NgB^dQhݾ ^&;EEف=YwD?)] c16:06cQH8)]ԩvdX f$,I: &۔N*$/t܌3q4F#vUalxIi;;Ww^EuHG[/]k n.wL˝+-/:K)y$IXwbޫ`AqQl2(eD𔢬렆Glb.?US-grrmw@PЃ+ EdbCȖY1[x;*+e\|zXQ 3HbR<&0Ai" H (^Dkk-M{5ҋ>ƀl,F0]IU`W_ ֊].ecqS [Cjz^Ǵ=ӄtSݕ$xY#,,YZAh%奅(ʘ2gW~G# lnFxj8ϳ!g?Bq0ˣ̆[/?wYӯ",? gkj#J͐-~qJg=%  ̯'ҤA"ݧ|mh,v=P_'7m6fACl#U+ޛ'بv8Jrv\Pen>1\43z,4If?.A,[.n"j{ ٦?f;C v-A .iHIy緓goOu:Ӊ;Qr${/;4|͵;];tGu=Ѕ> ݓ Uog'>M\ƅ&-QݒIin}Bvz7ޫ4ۮ_;J!@Mq6Q552 L2,>!䚗MG%DdDנ$mEq$,Ecv)‡,_C:Q(>HN=%Kkx%(8@F%4󸏺MnFH-;aH`y-r6NZLPLT (T4ZgDsKr<[A A  e`Tes]{T*ώoa#h 05T@@rbt5fA5<خ5XXCI%OD 0 ^x /6vD66S-lwO}b ZoZlo{[t|꫟Dgtw$fyQ)4cu{f^3޳=ֹo `gѼU&1jp;nW:\ U{zuU !OƵDmӮ#]& DQ( $d$Yt L:Em&Ha'{j ݶu*oonG5o4|ǃ9Wϼ"4O>p6vM(߽<8<ޑQeߖmߒ[zL4!XyPSY<߭fL1F;ͭxқSx}6y&𦻩)kB֯ tkBR݋mϴވi!FYI=NUf 1Ůn-!Lf%B(F;Xx9kc}Cng0P(0&BlLAA!F+ E'"T&$ ޳HY*馋6F* E)'Q 8 H[Xe!Uxӥ3q\G#NѩCzp͌5s28*UD/W43~WP_(Mo%tFe@? bcAOf#˿nɈƹaxO|SL/},!S2e$KrpbF=z:;RZ/W(wW<(Bґ`'TƐ-0iL SR2%eQ:1Bi ` e/נl% u[HT#v&Ξ@Lo|="hQ;eqX?5d:v_q;ajwNi4]ؽ\S d$J\ef 6@HN(`?S#gj(E$هʰ!ie ) ]V*b׎^gr zygj?,일mwaȅњEuFBNa0Y ⵩-}Aj` =)QL fƓ)]w*%/]0%N%ahi1R4A,Ϭ鷏 '9` 1Pp'2}{'dwlI6$6\Cˠ69'!g#aT $ 4L (>ةr) >@"%@+R$bwޛ`]Pc'3q]z)Hr@'e|:s@ |1:) le$)1}$Q[Rm &}3*݋g#O)22`#qJ m0)8'0vٕ8j[\EcjNZ]g:ӓ;Aǝϧ_ۆ;2ޛ GN[瀿c\z7jx㧒)*? 
(t Kt.*&5&NJߕxGL!/Uv]IP"hzrd3.T .MR 52v&2:*aagq(bX=(%cVfe]ǫaϾ2g_hty4M"(<'#3t´Xa۝Y1Tg:UMm]dZOo'>%"îwfq7bM/ޮPQgi|b* u(_*0F!aٛ)3ѱ13)tXG1a1 9o*e eP`SM"SQxؙ8aԯCT`<DL?DD#EzDq+oDccT,gm$I+D~~?{7d;]2O[YԊlIJ%dUbuh %L,**A# adV7$hQHLd$&:łKs`I&u]Lye(>:;[%"I=.hQxNs%%G (06JY$ JcIEXŽwv:C1p s9 y7 ^pdJ `' ֖`NjG?'?":Jʙb:c460ǗogAؾ9n)Ka9#1 !#9P^ 1 Z"!E0os܆r2yO I CDOI9ZSPkCB-KxC ˕}M%m}:HYm aIg SؠN48mN|r65>bo 3m9̖io#EqyMIqsȦ7{de8ɚai'^I~02~P*C5Q G>s3~ < Udc?Ha(WL0(Eyq0(Ki@k_; [{pmvnk Z& 0:y{DvK%6; JȵnUB>d֬:Adg9?뻇tle0Yu-shoU;o^nJaZk[]Sѣ¼g=7yJ}㩯獽{i.t}%ZʺjN2GѧFx\=mOiSH޹үqMnz"C$8j@Y$W(< _5]oV?Y՜m7̩GA\ ;v*3x2;bM0CB 4&)TЩ (WKM)0pZA> | \KجGUsglx2x{t .R'I.ͥʢ46+֐tk&3%1GXM%%6D-"nYģs)cp ­Rii !',u!|RiBH"FyR;mT];[xUL,?žCEa+ۦw㸅s$ٿO⾭#ǂՉk݋1|ק*ܪEt?vK\+\^*T=jm;|OĕƝS tφl.L~y~w@5ʃݖCΰ0NJwپ d_/%LI +C9+qTL;'s&(86\N< HJ.ܕ8Ȭq>]Lw+s=yzSnM[%C>TBJ '⫀ZϹïUs*hy[Hn,c]=i}UiV.nݔ"ˇ4;bJRp`>^H%RєICcĞ "7Q2QqdK*Ljd<)V rL ` Xy$\Jw&Ά1Sk\>>W>1- ,c4-'%ln=q|9^owlz(YؤeV)υ1ԦF'Ud"<;V[0ڭQ1#n!fúEd WWb=S*XXD+&pp RtM%T8 DKRDc%Й XZY8ظKw&<>!#MSzYw/Rw-fv@K87_Ɨ-3g<\"9eZYiRq*2TJS1b8ϼ@ ٹG+ܐ1cZMn"Gl < 4_uVW^ $n(!q='F`W:A^qQ։N'8Z GRki TP/Ui^ :hMG!&RԵe7 H .@<)pvpUa\ɥRhQLĠRb<JtZEA(c5AV#-b2)>LbifՂi Zw8MŬph/ůa6x?s\”.4ȗpr8~Lf#?L'ߋ%ޤf\<._~̿n<-~?SŐpH.|ׯ/XA0fuԁXDx6 Ӥ-H+-wiݏ?  TdW09Ҍ1"S``He0ףS77 U3,!uÓq9̤w_\7z_/b>I\}t2yݟ~9#ƾXnClIN^iGjHyJx́@U73 -u+8ڻ+j[iORq R];v@Bf `hn)1H)h{**bNoѳ>ɺxt=T=.hp #,s"XD1(aBT[/E`4A^.] _oTgUv^ÿL80Zq녲N#^0q)eWNaoAwgy* Hx{`prs^t෽}D@@Mx󴦖ȑO3P7:b>wj[WS:0$ᢘ~~ r> ~&-cv||5G^T DA- ҳj&{g^ jw=*+\H_p6ܻKCM O=anᡄ{WE *lX/k/U~]As痪 @3t\;/޼91d`/Yry|j>!tpy5<ϯ>`Bݿ0tR1=qqxX+mQdDe#} G]eQB),Q$1iEdQvVI 81ZJufIb6|(Re=R'fkw|n&&aۨ#:a$Rh|&ZaiAc $ ,4"FJcEMB4*kH)6ʌ ;6`#*$@ 5T!ӱw&|'zz<-Yl+(w՗Y)bST5vE{GX)u@0XS~0$.K JJ;^# W?\φ+E;\%)`>+.$`*ءU}$=\@b\%WIZ.=\Hp|WI\~0${U%eW/d* ,'4wzJI0•G\W \%iw}`ppBV/V)'pf)UXlY ܨ6'M` p:/?ٻ|MGa4G09W94xЀ#,J^}ϭ4l8U=+2Tؤ?Z}|̊q0փ?sZ.s_Vcw.bMˌvV-v=}Q˺EYn",[SSb&yU& զfЁF2+7y3 \dUՂW'C ~Nlo O+eIZܕcQe(_nwDgrɊPKf}`1P_hϛUiTd :-BT" Ϫeq/k+W7>fc%?uu6@ nyS/kL4$3-`;: b=?V^7z`9m͎9۶UA a4g'jpErW UZ&F\!h%\E"Q H3CltW+ V+zJ^ 52qE*W1k v\\˪0t\JF\!jt*W H|2H*q[P5=$V+ \] >R]#Ijj'PHZpj5WCv1kȀaj Z+RFw X1gW$1vV]uSA : F\m]ʍ>ppyio$:A}S O|99;nf2LNt֓Nd0ʺ)ehwq>%Eݞ̜ t\^ ptM0H] ]J>}qW$xSz] P-ګUF\}3rOzG77±rǂwwb?Tnt3ҮH6{:Q8t5 G'ΠYlcAqj,eNa*6nqcV:XZjW`OGWL &kū]Hf..Ҏ>.BYR vP P.0W HCvJ*ͥW$P`\Z9aR90z\)b/$X2Q H. 
:H别ĕrWH]\[M'W{+35 "\-S 6t\J)F\!؊p%gક[MW;UҍC\zPTjq4V5kY Hѵ J7bOzh$惝!8phw}|ٻ3yG j"DRl ן hݛӢ pH.nhvcS6zQR[Yji upgx@?9=vix߇gxp7SDJ]N-;Fۧwnig~#mҏ/ׯ/}v }wW.վK-w:֔cxx>R]/4DPh?MWGx\7 >?BeA2!$D%E"9…3+=Ȍ fPb6Jc9T/S-σ ަ>Y|UMEE~V1DwίO~r:Euax N?fˏܛ6C7>N$~;=̏V2hhϺh7O7;Cɭ+/^~`JJ/qorok5c,]7ИulxJm;=hhћbv9.'ŏp'po0o8_jfE=N\{X|:Kݩy`{AY2oy ),5fk;t_’-Z}W英mIm]P_,2hc>sWe4s'|vcjT 9U]v^۾8tzqrp9mŲ9tU]n>H-j*v)^AS_OciW=qtLbe,8NBxn 0FD:r$H}LI_BNZhRpC#&I<zd)I{8Y6yЎ)̎R ncA_)̪Rp˹}Tźhᢏgoz*RL7!oQZV.zƌ櫌^S5wo,*rt0rv:\ pV(MDL'qX>Z_>e{u6gE圂2Cp\Ea)9[,cԟiTgYzq?S!,?n[C3>M$=55[Ӌ/gPBh|cL^[r#G||6_=n^`^ %@rHFry\hjI 4RJFTB`U( DOM|NP6']s/؛؟W鍅Eƶ,Y# _kETKrZ|NjÇ/bO&O7Nls4<5yD?̢S -6ߴh5&d>Y^J|L lJc#"$.قWrY9Sb.,ZQG_r'vMF9jRLm7R{tE|B+,E1C9D@ +[toYVG ^+6ƀM,:JD"Rfш, 4(6"&z3{8ۤ~5ۚ8"[Q~0NU>.^W w1} nYC ^7hMVAuQ+E%ğQ\-Jndv䫫F4`.CG^tK=:b{s*\UL>ρyvqR.*KY&J\{ KäH# '7sϹ*D&ƍmi 3czgG"5="fV=͒ͥbQZ͡ >wwЛo%uLRa4%4艒f$J%', AĥZ*P9 <G!m7~JgB0Oro|T3\.>L`PcRl汔q(DHIq-h&, bHߔ"kR ٰjDbr魂O37̋ڒcj{PU?sOV`*b:m]#_jK# ᭺!N'df2_ Ė^z[&&4q/}P qztaS岿CUvW'ޓn|p/']VVb[RQ3lEvDPO b?P]r.|wyw&VDc)rʗH)~9@J k Ȥ0Z qIZ`,bLs4PdX8~x69fQ̓ꌧ@%X&Zg\!1 d$S}C.?G˫Y|<737ooa!03Q2=;U^R!U&.Ԃ>^ s.kzI1vFSie]zoX鈵"E!AA::*laÔ_}0L(Br"ҠCB,1E40"I.i7 yPJpi!SCt0*HM 2Wt"gޛ8;̽r6ڔ YՏw@l,p=Ζm}vt-2;~K!%?/20A*@h %HB^o=: B 0!fa0 O`lb׵MgkP˜q3$hb.`:T tL$dXaڠdΔt S1gqK&<1Y}D?s}7O"y"wj6lqn <]Lޕmq37> ٍ` ^Ègn}hꞹϐ34$m^jp͔"pm4|!,hA=^un2ZqÄ|4'i@,y˨y$"(.eo7̦g$s_e`>p1/0<{^^@~&8,ַ޻Ѭ{7;rה0O L~vq~f3~~_~̿0۟c\uU稀j5 ˗ץ#‰P:mITPPd9߬Y ;U^w4;~oΪ_ 9x^n&Qvbؠ1 sĭi%uEq' 8u'a:_v>]0Wsp߀^_>Gl4A,ǍWq=l`> D (Q :zIh>>l.71&IKrOtŒ{@HAKFz.d)B]˶EUnb1n-j6γ]\aMˆ^y+]7|ZJEoP--!z\Ƣ"͹vhpVgr: Pk]I2pH(c'۶_-7ƅbT{$5D bn\T"ESGGTnF,#|wqꮯW}͕}ّuۼڛmWU^f>{^4?vCgZc s{Nլס3v(Wrq`ڤY d<n{=s[I0mO|Ejط[ډ{ug&즼M.e0g7y>vkߐɆӞ#J1Da4#nNo<ëxB:bH"@g^!|rn b> є^#/W`=!KR$b%' VXL2`Q֒Hv֡YdD׋+EX({[{Rn'냂p߷=£U_[RE)z6Hr0B0r,N4N`KEƩ;π^Kq*/#qc(E?c:GGQ *Ax V" 9o@[0. ,6~}+ A*kn^ ɃWi&Wa:yWލ/_+&/_+sV "g|xZޡ~5ExZj$.yV y~ (J|c׆fV;ߗ^7zX|"VR J8>(JA jiI@enP^q6"kx^a(F`|;c&߿rp{De}g:?u=x5!\N(OAЂ@)R8Z2J$<1<#ءwqj]DIZ8o4E;u)15^h7ulx.CƑrqVҭ [.i>o1G| ID(X6*K^yy"!JODXjURΪZG]™h%8ީ'̤ *q=kpoܯwgt}5ْ)STb,ȳd:wϲտ>ԏtF$ +#DIK /t.Ir~!,fKzY,ٗY#°'hVoS! :*DjɬsDS=8 H#>18z$*V *Fɳn* ʺ, .1 wxTZb5Fht$aji' $CGe j=䘼rCq[ 74߱"H8ǁz텐B)P,ՌEsJe4&EnA;|?֑T7{VqZU/ɤǯ2g*I\Mn0?̖:ɛ6.v?RE(njgEDMB6~R%'IZP.j}g?޵q,ٿB 6#Z?& n#~ɂiI%#rj#N"qHT՜>SckWv.Y7a'޺NpPO ӲzAw/gg{A5-)B_^guؼ6<,>:$޵8dIY:2}t$TO+(:_wuybإ8틡+-'ʉ9\նP~:8j-u<" ?x[/Z 8wΪ9PV^e0mޥ?B_W3Q[nOgugFXգϩT,$yieZ9jS&[eM!=۩A"{5U1]j7zorΧڕeċ 3 ?G|U}xɭ^}0s^{Mkf?{:gNsbά9v}u[v&_vJrLA#GU =" "#b)ɒH:Kۮ5kH: ໄQZ $L"gM"*teakSn7 0V& X~ 9 & JةW.~{ ZHVG’Q1fB,RQ1 aTQ|"" 6zJ :eWI9W9r`Jh('%Lʜuyu~6?tҽ ,?I5Q > '"ĤY(Q2pii/!HDJt5%@pBT9su8GUcTV;VA,8 bIv. Uݲ!BT5Lvij=ƂѨ<'JZ"B\8 ^xhBbJblD1vceYho=wں j= fgO;Η5 @ iCmw*;98+>;]ӥg[{ck*[noT#o|6'ep/N/-:b]Z@ Ad`5C,@AuPU|XρX+c51(0&BlLAA!F+ E'"0!HAYTkcPԞrŘ󾀴kAQ1㿪nehzviS[O_ɼ^~xt*aw<7l~6t&$VS v2b5\g"VSE7vj7Mՠ̰n2骚 0tU}gUaKW0]yaQ j0LG kTUv8t[[fҕzWBxg=Ǚ\8kgJWJt[pddolA)}LFP]tjXQK=dܨ6 wU\v+NǾ鸼]'UcQu f}$?]].Ms{L'e}tG4h\/[ꈯڮb|7{Y\)~pr~?VOj ?}d3<( _@ T?l9& .r[dJxqr`~d0q[=Tu0TE;vT[K `B Fc'Z/URcKW0]ib?]Jɤj6SIWlѧ+j+PJ;9tNNdkcOWJZ2]ʹwugK်{e]nvNn8ygn}:WoN?,)3v1u\.T TRd~皓ɪ瞬upԽBItM7Xbn'wUk!gf%3Z*>Q ^EqBJNG2Bi3V[8\Tћ EXfmn/:/~h=}e¼? _HĿ~|Nrpb,"_W>+o9#Hbwc,.)dIZtU&lDTY?QyϾO^bS7Ֆ_|j~x_(~RGҝ2vr(-sGBP{H|˛zA!tcQ 6&~om*Mi4&ܿ 6&~omM 6&~[~omM NlQ=lhM 6&~[m-RomM 6kM v r ~\mM 6&ې -# j~jM 6&~_Qunph=c"C-em;tZwuw$җ6p6 J3p$[^%A[Bqa x0Cr 2x*r8SNւ:_h6'e*:9A"^E2 f"oo=n\Z 9 k)?n㠝'>g)' 4)Xv>.!jxi8Lj`>1m> /oA׋wVt$rR dbsQʩTkj@4 ר`4/kjsTQqndoVw2\ ^ǧ2h+%x['p~ >Ĵƹӷvvs:͝Kbwq=8S2e%K0,u‘Ya})j^lDhƏt*@<2l! 
1*ԁRm%F?3f?Tuo{'>n>|TE4 DmlLPJh콕(9r]3XVYF%xPrFAX:( B6dKFN+Á58FtA:U!"3A|VL7@{j0pkoO[FAO?uw炫ŤCUW_ެYBauK]T:`_Y?gy4ľ̵j,A(u 2n+gYd4Ud,I^QXu=h{MqWtMéTa!8y A%jY(H^YS*(%Wx7M%d@Gi?;LܝGA/_L=_ٻӃc^wV쩄Zy2<8@QSL:t|H84z)##Ʋ$D[^=BRbQU 2,EJ"H`lTuh %`#W#F쁑N YD!Qy @ `m!C20ߔ*R( eBDAGBTCO\am"9>  b4ѥP Fk*|e(@X( Y*gHlJUJI`y`60nEQ bY2*`\bE Z3TV\x;F{"]Kx~vk0hL=9ؘz뚡2=siY(_N!wq6!~[WݒCJd`xx@.{,t +ʨ@6rM|X攦@Nق33p"BaNe'Hc C iZŦs l}L{-N>/}PwY_n>Ye\u}A=A=i~ OO. 7"ulsQVmN;:6t)҇5F1V REHc JU$ЕGhrZQːI8lR "eH}Y-2(mRBh zDNVxɹe^6.?6t?%L\}jvq\wALë{«=Є(`'d(JLZ;~j%F ) !w0?R be3vDSWiPuTL!jWZc+v$cN{Xi2C*Y&*XE^#e*GWG(İc=i-tZ;K>Ime)~>>8|X:`2H #1) GQ PF^&#T_S%v|QڗXDPABٕ #lfS-impƔ܈2p8oh׏2IRsqǬ8Xįv/܎qz#icp:4Y)~d>"EQ%a2ϷyTI>+ ޖY!+Sc/L`ȣKl $:9lQ?Xԏr'hٻ&rlt]:pw]j`2om<;N8`fvZ}$}ttTv֋bI]tAZ%ԭHQז]߮.@_͸wf0: Xt1;z>T@yl_g__L21Ca|N$>|J+ȎfϞ>;@NPDp aVHxAHEA/{1nMGg{eW;KnE5@HkZGcv @3H&R G\r ZdT=qg& J,ɛdzh6N\ R]Euc4f0 ϲlN3( ")z.WʱbH0߭-U!Dsl:7ZB>xv- ǓhR2g`ڤȮݯ>rE"٫g4+/?v[{C48a#>/S ej(\]\#cIȠ'50ܲI??v9v |%gS1Iٴ tT!v=4qߙA;YgKRt7Ɉ e?z*j8m<-zsh|,{Ł* ~>O_=9!E\)7-z$yW'?Yp>%AH#qKtE^-YKR Ԫ竺O98 epdKuezV&8 P[\G^=^s;{j?jkoˀt S_^gYsvU`` >uKEkIXRg0c/*uRfreg@rh}F(Lavdo[~o/^ןK6VimBP]R1'[moVAzR}!ٰX4+iIc]ynY$0es6=TPjz7M :ocRUYI9Eͤv2[N )$Ds5M0~CR# qS\ b:bX$5{R$M"1 >n~-%b|g;ۑINp?vqvA"Ҝ" a46GG&w%FiX$ % t8/´4w"xgaf.o1/yzэn8}CbB1$%*x&tS?~TwZ!T^)&=8Gm7cL;g52vU:4cS,Tcc=i>YbK9jZMʯf0}}o؆9B0%"6` LEIe/MIUIYM !{D%$ؤB:Eh/#vHH) [؝h[&殠vgڱ)jQ[Q{`oxTJl#CB*"(XDIP`{#[I0 _aBgBTh5> 'H QpX904cduwBEbD'If})]Lf%)u^0jzTx)+붝M{%E."759Rެzίu{֙ȸZAIU4MJ57IobHI2cp|Wp<~fڤFLv>4 !TqM$0Б9炁˧ v0f|'Op:t+] C_rbWNtmR-Qb)Nbk U cJ#68R$#Y@SJx->c]:uyS@T^7 4Ӽܚ*)6m#W$ K˻0CyeJ.q._'q'i.$Nwj6Ky:]$>Sa9S ~K23:˜Vjŭ:0ւ{ƑGtT :5IwI]rnzvowÕv{ҹo$` ti uzQ!AcS\΢fP[`Ĺ 8Ƚw܍ؖbTn$K_` 4!r(qTiUNed,$`f$֫nO X`T9L5DDꥦ0+K)"Y3F|x"}9Nea:|jjMO4kG=>u\{ >Jbh"T"VXh A+KXivh5Zw 62(Nm6*GT KI@jSEpљ8{<;+.q:Z{klt6E}ܮ[dYRN-(UQE1y) FZ:ҽ^/ =)2Qp_x;P}+NS".t7)l&.߰~߼|q7i9ٮݤt1P$@$Kolp0 H/37~)Gd{B0h)]Cmi+ KUjy %64^eKOKʼWySn^4Ts7_3dX43p(Q'mG^>@}:i pyl7* +#P*IKw|W?"\i$t\ \%q~(pm${nQ :oٴ?HNQYaގ>~I%gCUYĥ?eaX1tww']Jr 6isrc,FД.աrmZa\Tߢ`?_)iyDG5@R %,7g%@4?ïz= wIJϼ#x .Yrf a(Q)";) =ḿ(Iz0q/{ޚ>+Ϊ* SqC4?wgѴJ꡹3<ϰa#@ &d] hgܠ,?uH[GSDIbePw*վh v6f; bs klX{6n%~MIdFoaѽɶq+䃂+9e@\ \ :+{Ƙ2r–+>gҥ'{Ҕ3}JĀD8wD^^(\۠UI.{ꦃNNH£+6'Pt4iIAhʄ!wfl\ roVhJpz˜/tY+mZ[zsΪd#bRQu"KrC YHys\7%p7a3yS96*еsU}+.s#;l{s_h!LE P=XXE B,6-m\gi3#pQ^k%i'_~0 $'շcV>|neFhCH(fAE+An&ՆS7n#AO0@fX .&/A]#RW-ٙ -e5%Ycu!H6Uw9X雭$C$}jcNm דjQ唍 M#T 6e)JՊ!Jӽ.>NΖVi4JPzqsZ;Ǟ;E6eyvv e/nmYiR odiu͛h4 cDLµ4=_zȄ,^IxsO[7:{@tFbh!SW'GK8F}|*:dY=ZǬJ; ۽4,Qޤ*CdC #O7ǠJ@#N{'p&"0J=B2HLx҈"'&t]=Mn>"U-Id3<#KƜ 9X\ Q\||ޜ4ARTE8USI{k%h(%%Ɋ] sHr=ʜ%x &<}d s9DNVh4V\PRUoYy$<@Ic#nC7!vD )_J&:&|K@ZA[:*]:∬ @M 4ݙ@(qHs`' b ڳFx$/Eye(_uUdRٕ(ދ0j{VPR1-L#r 9 zWТlM6y-Qd ՜&TY\d`{2 {x,y?|p.7$ sX{݋~)4cMDq|RT1yQ!BNh}Y;33q`(v>\i%רkW e] lCDB$a-7T476[Бf%SdHW=$V*Xȇ TPPj6z/3~A\͆>(|" H&jZ5 @6!+*]6xѼy{KH'*@PWKm}á"YԏD?/A_EYьp6YK1e8Tx*?7:<=ߛw.N&!z*`YeDMצBPGѶ-Jrm^sKRP _@10 9B5Ԙ R{pUf}J`@;뒄 X? 
^r%5p AP`AH Ш3ES'2(X 7o#2l)b*'[QW6b]᚛$r,JxyŰ ap`R7J-*FjQDFb1;z!i,:WQYj0Q380)`cFj҂cZ b $_'n^&&ci` Z W-EB9y9ݳNKyB]hOfoZajlJ؝5z-6R zxX@Vp*#-]A-l1EO zȕI{!1?H(Ƃ#?Y6gSMh8]"/ ciVMƯ"0tst\- Pk|X~:CoW4m{݄yd]pB{ P GR/Z/obkKD z!كA t0(ZkCDfׯ~Jq#I"M\N & Wki{}u#?ON~WUH{9&g% QAA> `W#\p K iXK  q 6jFYksOWds^`1zOpٛt }IWdm=](%QNT4Ty8YmM+rK >$ׇG]֔bi/s|?.XꒊAPrjHd-n^Ohofj~͕N 5f%x@*LSm=M\%JISxc“_ 6KdE59X9yt=9uK7SI?qzrq|yZro{pu?eF𫹦}d;bʇ1?7>GxMG|9ŖIBǘ,fi]=I.::VhQXZCHIpq?jv`E<דz+KCWM F:DoPBPɶHfenNO~ovy½t@qmʘ"2D/MjWp\pgKmQ.O7O۟-_!]s3ߏ7ru[[ۢ4,pkעX*?o_Dž;GvSFh8-qWdmqp)U~~upp1\,Pؼ^Y,/@זp*o֎jcu>xuߧM[ N\]17=vw@x7ѪR%פU'RZp?V<  6|6HukPvޝy(պ59{…U_-PSh(T˞>8M-;?06Y:h]=[WteS\`1JZ\ A*\ *>̱ NWȉK$bt~)Hq}] Omn7\xT{Lc+n3m=eNw;˃+dp:\Ȥؕ2&cKRFg32n `_Sw {Q60vK1Nsѡy-{oAQvv~s1j&"5G-C ;HJ7h[.Ҝ( g|5(IK>%{lRO ҋ[ *] m,9x<|ݘ^:@&r uV8bF898TH5?tOǑGoOm챱.{BnVQ[;+uHk *)  {A YS= ˚6v"YΆB"E2(=kϱ <9GN'v{͹u<'ټd?W҅r_IyU龽7W@E/)on]BhrS^0KK_٧50I7k\e2Yks_LVZk_eK0'now|ki6)##lMymDO^sRH=)$>뤐}FBvIy=ͧ&&D4Oe{KTĐeiZk)x|'tXSfѕYYrg3TM^y4V#KNʀ#I+2R$,6:Fwu3E2$e[ _!/i293=aUWU_MU} r+ Oz^k_rbȇDQ*T$齈&}_XΚ(95 W:(=:o*ݍx{>{yk~Ze͏BqRK?A(׹( ET{-Ó$S?˵Ҿ!DW'"n $# CI.z|tPxj"5Hz6o+KA6*K@ADXߋ?!i \i}.yǴAsTybQ^?*w=j8LCUME:k珝o#+C[[.q~ yb8OMKŝ2] w}4fq*o:L_W$kzQ Lѣi+߁eb1b4#1rAv':.Keޝ/l{餚Op yzmSXgΌRu?]\\ƾo78nu&0zvQ(apTvŇ1+_}7BBܡ6v.'CvrN\IE]']䞯f濝a9(IM}1dBݺ|t,([O7]ۙ(%;iy2^ hϹjsGI^*3oW"{vQ'1Κ KM=Ut;5QT)gKÏ=^!,$3w?Hzϯ T8OZJOpO$7g{-̸Ο:?zyVך3غʚ`M/޲owۜptq-!4[1@ tZS?C*Bŷm^Tq.Ȏ`su1ߒ qCm`K,ggff((H75M{Ôn͍yr9L][aX1?bikŰfGxld΀>!J Q+p? v nw;]} 1]Ul4NT.[ĴBL &va(À蹏%_7IK%Ұb3AM *e9Dșl}ߚ̣oj:{5pu@(ZjmRy.I2KZZLZlՔdS73bh1TNZU49$Lk b"Ykg;q]zu/]ǢNx%!$oL1sA۫D+V"(F/ѷ:v@:6@0-ۇj@[(6p4^샚%8W)t UDVrцmZRy4`6 0K<צg:ʐFJ[VtGJϣ}nJ)^Uv,^Ys&OF]f q)}آ#Ì1∺@Є?.8]ܱtيA@n{4̥^?GрbBk|xQV]DUmi[+\wfU^K/IeЎQKt,,xUG! mU#yThxnpd5=GSMn{&,XOKfms_-!f|\Z {+|›*C[SUҲT*ĞE؊`!Kt @@?lb%?;\4$DT/A;!٨уt TVkJH\)!*8kNu@SF}xO7.hXyͱ$^SMm $Dh{GՒghCmQr;b#n{r3^XY'R\Tʘt`#Dǿ)iQsQ9e(!0s*:;`̘ G$ 6L t #+K(Fi{`S̠hY "(1Km_E V&%jfAʲ e9fm7r-F[pJ1*^`(JۄX/1LDCR ѸUiE*t{8XI #SOcrhL=ϤSεPglϴ}Q*297Z鲲<~csm{q羯_vN[i %&n?<Ƕz{z)|imtΟwǓՔpB'xR܄*x+b.uؒ#T-ơRmdFDQAjv܃$x#Z+#e<cAA<_8 3krٛ£:Q: J̙Z, 2vlL.󤸥ϳB'tx3Kx0q|sv: &~hvq\wALWWa튲_4QdЮ*4"*F'@5%VspUFrtV @gPGrE$=x% gcZk1?QZYoB5:Pp 0tߤ ;?| &8)CjL/gn EEd)j !@D)Rg!d%JRdƫ`ƟRtふrU~sqTqw?]yoz;;翾?4DVRTJ:[ bq^dq}Od ~igax2ן޻ Z}/|=qr>ͯΏlfL'+_ɬprxׅmx9}w_rK}eKImy]wip_CTٷA)Ɲٝ(uvI;=:vuɝ_<$˵Z0~j0~3qyt]f]'Wglq|ZxfפOϩ.(nyfr<y}ҹ G&V`v▧n9њ4;~^Ag0<'=Lyg#0CP6hݰѲlh)^Ah"@zA-v (U:06LAo+D 20F/q rYX~O 5 aS0*t=ٻ6nWm;6AwvK۠5"K^]}83ƶdYؚ'E\gCC>$giN^Gfţ#Gw%FiZ${kR tc> b/66<}]m.l=Wux#w_ ߧ-,_<6BBgT?>v~+oZ圧'DYtp?{v~v;"ގ,t\n$L`ffVma컴Q'M 8#%5xe&#QHYu2LwZ 鵌FMFSҒ/ h =>Jm 2~T!9s2yB8̤G5`GE(XrYk=70?j*SQS:$S{+%R5|L> XO)-*vrJ-Qz2F%rX_a69/Vp5ʝ lEL-ЇﻲCQV޽P!Ey%K6CQ' !CN)o2P2zEH~ID POpC飶ěUcLjUZơ\Bs{Ҭ| do,_ɻ|4^e܌Fϣ˟Œm##y8]!6DKEIeD3R#^MIU:PYM != JmR!HװzIR:CBD18Nƾ&1ڍCY[7ںgނ}{0E62$B`NdE7;ɝT%(F.qVqЍ TA ('pBWȁҘ18K/DmaǁQX3bψ[Z<= K7Ka"<\(AeR;4"@I- #xi$pFBb%#1)DXҠ%MFbdk*<*UmgK6u6JE0/{^|tFRŃp+)9NhED+NcR0(j!sbk;j!ka| cSTѫ#wvbNE'Tޏ!~v *@J>dgn2B:t X>8+~tge7g?“E£8?;'?K^i8/X~ZŞPn2( Kd68ꢟ8+7-<7 <(nB G:6*]aˁYȫbfyv[ #U ˳Vd?FLVF>޼+vob:;~|4SrE cqmjX8Ϡ5ftz:^fٟW=Lgv/~ڀW6ش:6B*sE%l غ^ٯ+mem>K^9#OG.#0>@FgRh qoh$2=M֐+{C7ws<τ!"$#"#T -:?JⷁRE!#P! :ib6MrJ2tRipz/6썢:sư͵U:28l|ЫG؜-/ٯd>dUxbtv~l'Aah>mH&$c K3/q g-&ZXU$%EUqYBYZVxLsl@$_]?LvOe%b~'eX[Ef:쾻={MgL-eͶ6Q)Q77a(<.6if1G]LLzlfNddlj?_Rbl20z1-ϋ|^=3YS=Qe!WŊG6 7Uyvj+m:GU߿9V0#)6gF\t EkDKEr!;3X1n^?js4LYP% OO}M'2AC99#FN>8Y`}g KO=WZxۭ @ku5z BCtk ɮUBKZOW %U=]BD0;DW o\FBW mR^!]1uEwǺJpIg`BQ*.YW 0'WtƺJhU*ԽuJP._t?tT{;ph5fmk+IfCt]IurvJ(k#VOWuI &wu9ZINW e̬^]iF#Cdg*ժ+th9!mwd?yreySyq . &c(/ZPd*̧Q _#sT8"/T3*gQXf?-=EǸ?OADkܾz^{ɱ6$Qs-9˝R 3J|L%?^VE;K\ ԩ! 
"B}(qTﳍvUP,]bҹVCbePG 9 QZWJ`4`\–J{e7ג\X |jrZͶZz"PllV=f0!UKQW*tPʞ^#]x ++^[(JhOW =]Bj);DW0SD;CW VUBxOWui*>~p ]ZW7OW %Q=]B1$:DW 0uΈb0k+!(&Ct kStJhj;]%Lt J T+Iw\Jhl;]%zt-8j/U;ȐОjk~(Uo]F %1T{Jhi;]%m;8#Jx;hǝ)"r£{I~߻ ='[{84Z=?Ma3EH647 ^wo_|2P 4REF˅ Xaʨ !JKG *,mu-Y}j'= ,YydcȔ)RR%vCLDK9 Q Ƒ`j=1T^oǦa 'eĪG9p\A˞;[V|߾|P}/"W)zӺJޙ*0YNE8w qܘ`܌Ө&$WOKe #i1`[$0P`3~5=FΆ7FcW$Y~bVB$ jp_ъZBYZp/8Ȕ*i-t^N& u<ٜ}=]'ßvm!C=bB"rhF&Ldh$cUې($}1(pN,d@aAD,-MJsh"fؠ m] /AQv 2:ҵa݀ 9,r:g}8"~IߘS*@%ÿ$ҵК7W]Q^̛L/=rP8q,rY6 JKPȈn:a%Ƌ蛽%<@BG=cY>UZ>b\tѪ9 _˶z2+C"[g)28l i :1F: lp6HF7g)Oc.*5X@ hȭtH,WñY Y2&P8g+ R_?][/$٧>|%s30OcRYvk(d[՗uxy\Lܠﵳ ڸ6c@Sǟ :ޖ2޺9 Qv.W %ˊڱXF(>: 4 #,x&URoL'QU[L8֘)ʣ"gA\y#sQ: Vwl4s卨vѶ7۳vv)l}b VX۴PHj)hl@Q$ ').«HT`OYh.(NxM7K nؐ_OFCw[#-'y`m!!:ȩJ(mk k0HO[-|ػFn$)9m>/ \Mf7N du%ZbaٖǴ,F2lw7L_N-~ۍ޿@_ VIn!r !b\qR(P$r - B}J\bH0@)UҤ(g8h4XyYI)#S{Dɍɂ(F$t BnB Wւbb+cqiȹIA+L';š}vUeZ!dv[>X`0ya>ͶCZ{BZ2;쏧.j-   UAT,ӿA Xs]>[,Fol35Cǂd.zoKOܗbG,&RZuHRyΧȲRd\3d P|AwZ"C j-~m)tHiNONmJ[xUe)QD+gY;&GWFM>{>|n1gDp8fˏ-E>JXL+*F&)%EFPmeNe4u6pLā1Z5tkJJ8Os2.j4=$}ŭl0:=ZnMQYoP*f (#>*LVYJ*Ahy%|rl}II%3b9xR)-)@<1BҤТgXcqd3Iifɚ wf}R{`G^ q2 w, D1g  t?3ǘ1tfY fQ FJ`ړ%L`2qtb6>;Fj6:o#^"&MFy^+R֣`s$3kOJe#6;wiY vsGËX ~?)}ի WutX˵j4˯6 3|?5v%rz}|ŦbX3w`1X.83#.W/iޔH9MIyno ˠ" =ixVb=x?^^Uo78ž8zr57 N ?J] Z~ѯ{04wd6^]f#nI+f7&xRѩRrGE\S.UsVdG&=2OPirV.^у9)/r" Qpn2ތJyotofDJ''[x: ?`Ԝ _jYӷED3oak*Wb1Ng<]/6d%R.HjL. fTg? E JY }U4Pi}E[~qHo \51>\wIkSoninئІ(ǭ ->sU{cf`H-șw{RR0">:ZT q39͞'e@ozi. F 5h(MH7 Kf:cs+:M/ǟH F7 `GD^GmOwHc앛COxE:}(VJNۨ(WDM張SUHWA\tQ]6@dJ6р1&( 59t%O1U>v˾,cd˹'݉u{Įu-A4ٕ9rDNXߣb!{7 ha}ܼb-@ "U YYU-d\6Av!Lw!$)3QtH0d'ܳ@ܜz&aAF#%黜VrnHBW7$+mAui{nu:]vӀCk' p4F{1mr{)mjPY]ϰ̹5чaux2"ɩ6\*'ALrbLR1˻җ;l}IyͺtuB_f!JDteAƁɆ!E J:"1HJ ۭQu>I-qQk^J 3AZshTׅs[N$ODAcdKh,:ݦ'217%-(IEKKqBF E$V^nW@bE{lrg%k^u2sLIx# 'xa$t*%'ͺ1Mƴ:KcW{ҙdjsL3պzq9?q2(HW+ +IT%ZBq:z'lާ2lꓫ7d-%cUAm~eխw]nsq紴=fnKb/nx֮GgC=7D'1;BN) sHQDCoct^pr>p ڽ2mWjf`FT:@۩0QacR_qskpT 4 kYYdI)R@7w2K UC3O֢!^#(b#GhZaBm$IPg1йv*kd+0wtO}~8?U4Z,S*$0iT :5• -MFg&Cz=VjQbaM g\a⾮uwJמ..H Gp 5C,2*dxO |<\2W\$p0vW.a|IQ& W$,QFxEx:gǸV9kd~Ѹkb~^AWqgk,*qQJYW7J7c[?`ޞjVUc_O'WgU `Kq|3Op+t/7CWU}8ԪHJ+ KI0R)Rf'1<$/V-.{h!IA'"W%&3B]Aŀj^;%XZ--Q3 j=, cZ>̠+84Q\>|ṗ. G05$ ë{« sc7HvMr%[hU)Y`EK~S.Us@g ZV [iA4y/pox=$M,r; \$4%US|6"~'3ù1P~b_ǠDbvf{y0r6 }^"1 #3ˤ .IHB˷Z:  . 
Nt2f -^֦@.WK:bv{!}-!C$]s9J5!y<{@L5[d>ͼ@džj~V2L;i`|X%&פ`0|K/w4O;g4|l=ym9)֕.,Z2xP(] =U1>,KPϕ2q}k\mX~ބx P}z(j[z"[KmAwU]b_\H׆dĠ z T%>h$zKsj)SMgzJhkfEeU~Xdhx|2~~}~9aRT KI\_d4h|}(~wn#*GyǗs>NKWU> 7Np:O"}w1^^:W;*/ޞoϋ!??-.4^_WկߗTne9a\+k]o=bvٗGȍc.+ ?74" 7%`uu/_nw işi6)緣 E唜 nU3LOm++_ / 3)!47jhu;4SY(keI[.zs] 4V' ,]otekd5r6&/^ ~i=Kn[)]4}xy_[Ko֣vw{/ ϫQ=`uo~:]@]ty7`/PތFoh?6s_Kd /٘ꒄm.7<َ0נW#hQW7 m {9fwtAIêHc6pدBXd ć1|Ʋ732u3E Ֆ-,6Sv ɀ̂U'pd9mktRZ ebJ*ΐԊ?>'Zî4;\RS];p[Q (oc''(PwBW(iuP['w\ͤkS-ƒU/8{K_k{R𽧄oͅa6U8mmI・Gݭ֫*Z-X%}G (@%sPhU\JJffڔ (ΔH]rgW3ղm Ζ.]+N ޲ڋyxx+E^,*P>l2`AZF WSCIߞmHv_(AAkMLhyg@Mo1f70/~;㏷:*oVf'kܸ֔xГE^RNhʛ7*7 f85Qx.AXG):"#F~ Dn9= ^9e@=)`C,MAvqG.I͌̌iָ/\xHe eٗn"b}0o~=x/N(YHz܀B$!ƃDsL(Y =3ΐ6d0f6 163a )鹷58ۏq k;em2k۞{\ E> !A%Ɍd -7F$-/%*f H= ACxс' \ %f8C5Nuε̇̇S@b #?dDIeyy=#34( ea$Di2kTpmCL0"|ZJtZC ȁYlwˇe>(láAfܸX6xA0&q >DяZ2x>q a0I$ lM-58ۡs0yه9gcneW\t%qO,9sRp)A"ZYjMBxfdbFx/`}a(X$9q`^L@Z[dƽ$,R"VcshV,gzL!3)ǻFe-0=n0dz&`X_Ma1AKe+Wwo._itg'k(t`uFhЛ8Bt=`3S bƏ&M5ohX;ID}j21Q>T0[ߞ*1vs#ހPz޹)~0~fy\(f)`I2^,P4 }( ePs:藠l@uܤ8@jtHNߥj97m)Χw낖wCh/}#K>]=emYt{=g|"֡[7fS'd؞O0[O׳[Z]"(_ƶ]-}TmdnZa<=SYn~>U׃9/|Cn%)ɑSWԺzC\T?LmHo>\ӯhrmβbc:sqM3{ qD $LY}௅#/O&?eh|pa>:G0܍Apً~}Zvqa(_NSL Rqָ,ݡj.fK\_סa.|r wO׾?;sy,_.jamag.uU}?f~5fKc񂅡Vn<,|o;i37\=xL-o ̓K֮ǯ}QYwu4"+Vdf}ft{9}/|G]N;{aYӖ~}jV^+" )@A`D .* ̻+rIYʸ4s%rju(4LjM]JAB3zg<2`Ӡ}ԖhIl;258[&2Xi0i B I<]|ݺӯxGǯEou(FSB3I(Iak|"{TvѠ}Df b,si96Nzch9 Ay!4rCRd S:tٴ]HczqҥZ3 A i)ɎynKĥ6C.1ܱ~c} ^nݢF,bΊkAs,h²:K;Ry-]N z׊M\B]nZ΋3/?_ Mgg}#p4# ]G@h 1]G@G8}a cCWWh-:] # =]!]I1eŨ+DY QҕD Y]`e1tp.at(% J[!$++L)th:]JEYOWgHWjiK+ G/lDWVί]!J*{lJoiz}Ia'vũnp:-]VH]r5TJ@WM-bn:FsҘaP%j ӣFX8potgMz'JD=^C%V^`:-4 ?׫B ۃ*&W@S4'?jͣ(v_x9f0 6͏YooDaI?} :jIu09,Ndd #A/K+˜^wV'Rbk/K˼N +z+bmviΘ!:xj`:&H2ZUt0*za\$÷WW+QM 3j "$"3PR􏯾ƣDW+ 7@_QsM W`-M1>µ Кc!JEz }6FQ b r] ]!ZyBtutʼn b b_GRtut%(TDWXb R ]ZED Q^]#]I+)r֮KDWwJQ)+{hΝZ+3+͘b%9XrŬ]!Zyu(u3xteAYjGA@ˎy8]!J#]Y-ibVrŜA;lizsI81t`}Ȼ5'}.XGW;lv+7\ަi!Zv<м?irCcl谗zpjno?NU'\t_V/[?iZ}#5ΒWU' qTmj$T]&j[1 :ٟ$Զ$5u9k]׊RԘ;_ԘpI5ƨyR}I >])yjZݾ”zv}}uvZI.ײVOl$[oÛʠ#vP _o#Pezz;>7bow>k/v¼?ncw7ŷ|}ܕK>kԘ*4<7upa<}+6+jm6'ra۵Jh?]qj [7FO8_<xON_4Wm/[o7? mHM-MVZ,Uڶ7=Ұqle2Z$b.!p ^ɟE)L٭ f~kK;UDdIYK_Yzl'TN8er9lSDKs e'J Mj6UVjߨOKu뭋c}FQ֞<5ōuԝGf,9ZXdۊ5轖S;NケWg0K7aa|M-_?}7ݿA/~wt%6piΘ!:xj{VxGe]Ўְ&5eW\Iqii>F :UtU'NZN);DJ>Rr_SS6BCWWR uΐ/zjPBCWWU ]!ZNNWR؞ΐ(u +Lt =]!] -ɋ:+CWR ъ3+R%9Xr bAD{vCٵ=]])CUQ VBBW֪Լ_j?Gx5.)a2 \Q3hU4:G2Fs[]`E1tp."]+D{:GDWXwp(mA@Is+I3$>O~iUG1+I{{eqd,wS桓(o~C(G!Ԭ;tK-7gXzf76{G]&vDٱ{IWg{5U +Y ]!\-K+D{"g7螮ΐ˂ KB+Km)th:]!Jz:Ck +l#IX822"6 a}^l!?-b)Q&)whRTHEj@A$/3#_WCW WVCWەkt(mtuWxu(eSWHWI\5])ఞvWCWIoJQmt+Ȧz Y-wk{нZSrgqxBXI^8Z)ww; pz=y3iOPڅbPvSOmzr]]^ ])\oBWE2mtute?UX>])`oWCW 7ĵЕMWW@I$]!]s~C3 +vjвP -otCJD+ޓ0NW| "`!RBWΓ: ]!]panXؕMtS$6:!U]4])\qk+E?Ԯ(6vut' X3vpjƮmZ<]3]~;Gx-CVY+hv%ط pP#cr16渫TptEE"h/M.;Q}~vkeR|vJ3ƛg{jS 1Ɋ M++Rnt(}&}teVCW6=yR&l"8hWCWZ hP2DӚƮpЕ1,^])Jk6:Ճ$q+\tϓ޿k(kNnoZb0QwիWwA?~/]h/]UOӗW5ߺdл?'/..k.D} Dј75 w/No.,{qͮh0ӿ}V~ljO6q_ꃿ0_t[|ܗqpEk{>-IPRW;UQ #PQC|Fp}~BLeiQ7>n?\5L qȷPOw>G#\AxyYߝ~~g%l%X3uqt[vl!_x6SaOջc&_)O1[|$ #>] Uk$W7 o(m?+ۑh[G -"{)8q;{\OJH";J T!lr3mYr)v0.l$#s}矡(NU69(#~ZhP&&u|` . 
Jxޜ@ګ*o[LG*Z N Ar#QP}o:|XDɔ;K=cFGZGHum-z>g4Ȩ&OMΆk|`5 jkKatJX][Y_Zh4DxּXVbZȑ*Pwr=<8Tɠ`J"sk ]P,)";D{ >B.@N`.dMȗ *'S VBAn1x{4 :uh A@kFh/ h;}ZqDg%A,7XuU Q兩pl5tM@-S`wuV6PŅ) Ӧ>K+5J=V+'Yl(Lhm+"]hٻޠ$P*!yd[Xb@=5:\cj$` dbS!}MHpm R( d>ࠤT e\4-~*ӛظAgd,q<dWbG;PfH57kiN@6M[A04'X,JLhW4 !?dxWTcRlANEY; }!!.A "5t%Ԛ4I!0ΜDi>dD3 V r 68 /իY {K6Y1PHq ts$eIp5 ֬NRX6[GS (ĩ`ܕ/H!8I]QrϪ J*HO6u(`Wz,+ELH T'%CBl,@eb@$FQ ҰM6vݕ l∽}Ai!}paPtr}-ĥi&ِ|3(JTvNڍ`߇9;v_x^W:yߝ ޫUzxԝ fBOB{T^%y*dH}136u 2j ib="eDEKre xC$*j]Jh; 7k ) 0 jki, mM!g8Z.\1y y@'^oBZڝ+*brjFZ^<-Qhќ@IDe#[qc"Xά$1ӲFJ)C2?Amj Dao|l6x,=YeX ⬈)flg'jj,A2Zuߡ@#,&5 [5լ›Jk2飷f:x$c,eYn$l.L }>>#f ifk9Mm/`ځ=nyy ~/Nwi/{LӣS ]@H77 lp) l\S &= M-iRjQ3y&4jex31r3@9Ȩp Mʌ4MSC^"$SiC[H'C? o QЬb^8J.ZE#$KԌ*#xt 7A l5?tXf=ɴQk$IBL@ظP},?@5뙐 !Ԭ\s!- VqdAi"Y;Xk; Zn.40zl=?-Д nFII45nQp: *nhI[ 1[x@Ap&M .Fgr *Cvå1)Dt0r";@ 5{T\GR! N V3z !KkBw]]~Fzѧ+}#7`j?6M҉z;.?-7SwscP##|NT`Q#yG7a:SSIIRvKvS~ ҹɟJ v'9;]Ņ[///zr[6_N[{s~}A'zm>/op77^$^|Yh~'Vx\n.o_ޜL:i7us ބdߜ|(㮬ooNvs6Rߓjv_O3KgB*e@ʳ~\WhⷅJm%1ҧJm%жh[ Vm+@Jm%жh[ Vm+@Jm%жh[ Vm+@Jm%wmY~z?;@ &Hlj0EjHJ6jI%J(Rwu[SEJ R@"%)H DJ R@"%)H DJ R@rR.Sapuw@χہPz:q @€MJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%* KgiuF p :J @ə %*4~)I DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:\%BuL s&ѱ3J @++3R $@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJQ}Ď}1~jn>/߼N^HDKؚg.I\C{K \F?.:DWX,.W]+D+DWHWye;DWVAU{Y8t)ҕYg *BnAyNtut啳wp6xg6AJ;]!J㈮0tI/eӟχ#lwɚf8ɗ&<3C?*j<7B.WV7y>'0k餤Od̫/ًtt|05(>; Izq®i^fggrycUUAMmoQ׬RJo*r]?9 T)-ZuTZm.i[wƒւ]XMhJ@*&{__"i!,@wZh.r^nL-oX~+^8~!ob;vς7țxhs]!`;CW]+D+DWHW+Lg jBW}4J]`Dg Zw([!DWCWhl [B3 jpwBC+mP!ֹu3thNWxo DWBWr]++ˍ ]Zt(SDWHW*i4Վ{[•+th5wBF] ]9kT;jG3th:~<]v̌C+oq|pew+@^jPzZЕgc&oVteMFe%bS_]uV9Ҿ?G(g43hT*2OFF8$م(<٨*\ՕJWQG_7T"g]̞zOGk=-)pxt/eKӧF/k3jb,R[x<,Q,[Z0j4}z9sI5RRU4!Gb,ʦ!mEyjBde7V[eqԈ;qX;-bV3'=-cP1ԣrwG?ccK=b|ty8?齞Ǻ@V)q6/_m66lKե ) hs\| y~kR~Otb-7x3 h{ɷ%i2 p GoJ@7u6׆C1PW~G3on]KKZ?B[w>nhpio [*|6W/HqCq3jyܿ)66w~n2+mKOìT\'鬺PǬ ͠tT3_X%φ,sRv&{E< 7[YCKy:å|y| E*.ml=ga:o.GF49fe4AϽ'eܛC:tj>(q(> <;~6?' ^0h٢;ܒ]K|୒}йϿlGvne:]Af~о;>ոWlJ-P¦6 5bd >PG:̥(W;pJή*AVQP΄ߊי\ǸRׇ{K}5G-xCaGk˱l:&0N\xֲNhjYGmR&)G\p- A0M]u*pjiRʖe1>UNxqB[0:P3pl"ת$EI0k~82Sk.R^(Ug_WfwnTX)Pvdƚ,~o٥w+՝3V(fdT&|-R蝎ٸ<׺VNAg:N:{#y/KSl$bD sxW" -7/x[*9 ƈe4'r>D8 6in*]TcԖ"YgCt:Xll"$ vO?zŽE9nY-dR6Q I:xrbq$UTVNGD%]P੖Zrs K%Lͅ7"(QLb. w? c8LcH_iG?/ n)lp!+jJ~)W|~H33)=c c֙:CME98!+Ț8s;!wڃ纎.oC*u Xx].[+02@ ;;swgt2׳͖yk1z@L\d܋cn1}"-o"7^Z{vk\yKLD߈+(|Ȓ3[;zlR؅fUAiFc}h2[TJX ?#UQy&^k=z{2O98cMѪo.,E|'#* KnP$P2׊ZF)Ku,Ǜ;s%5Qе?D2L )P MmUNpwVܒ=޺;k&G[꧵0謳ϢZ.ؤ)\+"g0‡8R`(l.Tit>@ڰ:W&kir<>LIY'JjN! @(:B1Rc=폍A~,g Rg-[K<(N©`\Z(d,B)xϼeH2p*D"換&?tĒO)qhJDŽ_OME4f1m&lL7;ha=Z%krAp:Š1.2M>7b.S\C1,Tno=׏. R}f[燀f&V/0ztm0}v%׿G!0Wdu#樎pW 8>ctfrG_gsNW{;JP5*Cay̺\nf_"Ҕ> O|qyn&rF6}IѴw&;|ӊ=R- |û[EyN ܁I.V&4?pOz2{9zdq7i9 hQ#ϯR<І+}㴿~߬_@k,*KcqsXƚO\kj춽#=n*bێ7;wyOy˺vp8Ss[m2gtCŖړQʹ>FB)N6n)h%&?{ȍ~J|8ns=n/77"9YJ<6~Ck(=Aj*u?~UGp:Bd)IPE)Nib/e`^: Sw6Wq6r껵} j:qb?bhVI[^Y_r|,0co|uЇ?=N,i r-W0&`@4mxdZ$> ii# x" xa-Pf裸{iXGdcL9Ǡ uSYv+ΐYiKڇxuA9dvvaSv a-qy5; ^d RDFmk0!b\sLR fQ$E7Eܦ JJAK+CTA0#@ƔJ I|RLrLe&f1";MWhFym!*O*+l^* g'JZ#!:cgxHWSV&D,FkCklcg] lkԻ%hp]i ߳uSziQMߨfojQI1NXߚͶgZĴc~8"fUHp=?VT"s$uxJA6@FoAD\FzJuͦ/HmwKT2NK'ڐgVsE2 .ɺ ?o1BB%?kkPج :_oGM+>@WRӼM>zHBF͆|,gm&C?JU.҇*uN>)oQ!6zIj{z*] Q5N::A@Y}T"(teN!YB%.2"2!Fb. [ҽ! 
>Lڮ!L8[ۍ(ʂ0K`kGdq ;xdq?Z# nW82Mn6}L6CNNjzz{#\n1HƜfLJ`kV5B%h8@lӄ%"<@9dVK\ tH=44&iSB ]B!ok*Ì<; iLwl-vЈ,SHDW<mq1(-5EM#sb$K—)OObO>BO<_0cNڹҾ/pʙ uv1s I,,|\&ҁ(d6B4=ԪzB 0,zFn##Ǔ_) dtC@t읎 -Aa kidЯj:zZlYġ}B݋նaÅėjxYA l_/J70ɂ1hcpU u ЇDc,+EtɢUG15՞=׍˒{g0_]^ſß^4DRdR4JmKBY'z|s\]"@?^^|JٛŏVʋ(^_j㏣~oꝿ|r &?~WoΗ\g_L{?[\s9rN;٫ɫՐ_dXVoA_ oa|fCq\^=bŽ/3/KL{:l`y~XY gPB6m|QIo~W7~?L?*57x:hؐ~3 -#uMZ]ݽR)NXџ7 a5l lɲK !F$ѝUZiW6:& \)c/GឥN)}4=%K)eCp\Ql2GsNzAʄ8=.5;GRi<,S`i!(uI! WA'@VG I?d˧g>]bd !qZ$VH-7A`-]ʫ8jz;m+8EYy".9gXMXTjq,Be, OI_f͹؞&.O; &CRؔ""B+\l;. RIŔ{bĮ&~Q01մXvQ3]CVX`d1%-*Ʋ&؛`1hjd!$/sUچǴ[fh!!Nj'E+mE -Jdc?%Cٍ<%1b) Yvv.<ߥ헭Tffݶ{Y%']_knU- Sאu,+!w(Yk&5K55\,fVuKf1b՚ l콷f\rj捑a:ouuy][L6tƷ.n}'z =dB'|U{IUӶֲpMϷ]hH H(}$F4nwde?%;e3;Q.312ޔF!;>%Ѡ>땗&c|2 5Ҩ̘ (5m-)q!XҍS2#;Ho}\g3qNZ)B*eLs1jvsukEY!?{gȍ% ,L{4vw^L03=dnol#}l}+u*-UIW,J̅j*|˜5@S箪\\K[;vpB+HO2U>U1J{O qB CE{M/ZjS Vl Hc,QY?sǩg^i-F)Ѩi Zai6JۊbZ9e꼩%R.eW*짝b^bs;uoLj())\0=ԲjMZ5r vFVnؒ8`ӗz;_RYZûzڎ'Tu"dͳ"sowe2o~:}9A4;KmY+Y]#Zo;yOq=w}\l[7?M)[Ⲩnn;| 뵞W#]~6EE?r68jb1[ɮ- |¨r|ʇW_pХ8V,Q{^Sj^?  }0Z4KFF)u^FLf$2K$̶8`om3,#ыUlD/QaŸk}t޻ >vyoțbD{79, 뱾tM˗ڳ_7l;߷]:fK#9r=w:˾y]v59Eˁ< V7ಓJrxb>{O^GMó*j1"oO/O_2wQYUVch+08֪JT#2i\Go}Z9k˥m_"1W OJݜgHQor殀 tKp7 (uţTtU™\kueds g+/i׹W__ o愉?fxzU{Qzds(4 F6 `m &5i^+olDk($DӼf\4ֺ7Cj^yQYP銁M]1}Szt5@]Y)銀htŸRŢ+}'S] QWNJb>AgбtŸ:im+t6jRo"ۈѕipw]1tIWFWvϬ'B:TW3w)l.iU>n5ɫiuBjxrͦEp,q~V=(9k4;].vEjfFTƋYWwef ݪbҡ@ l+x9"Z Zu?zpHs~ۗM]60. ,*qc81U^e$Ny@v^ 7+,qJjV8׵G뇹h*^{{ȼugCG~NpÄpUdzphuG]-agl@ŦYO׺=Gjsvs(ӷ웭Qe 60ʄh92͋l>[,jQc8fEuzD#4w#e,fpM3I4*qQuE|4b\cw]1äJ*"qEWDRa&] PWJعhtEOϑtŴ+2jZ銁u<}WkT,bZ}S!h#!&]1WpXtŴA418D]YlTAv2]1f:a+!F+gPxtŸcт}tŔ QWh1銁u<"\DWLܞYxdD8bXO̫;Q[=iʣLWI'0FXތ}^5cʎ/l?(nȄh1A(]m&Ah" ;~F+;jQ*ݯhD.Ef=RD+FqcӢR -"t>]T>]1U}Szt5@]IETb`gXtŴbJ#+eȘtE \)d,b]1IW*\Db`OW;v=,2?\WLIWCԕqF+6*]15i}D)Ejs [-h Sbu<KqU4"1Ȕ+5JB4b\0Q}ը+4~e7&6]rk4${ zZP1ڤbM bk'PN6r?kְMCN$'@/x]S:a]F){hf;4|cC=aRĢ+սSZHVڈtG+z4gjո0ʾ-rt$R 銁;||,R껮LuOlCuE+zM0Zo+!}IWѕ0D+ׂEWLDuŔ&uQW1vp)NݨlL[j1Ȕ>j(}<*Mci]1ew\t&B|܉Ei;w Ӿ`\L`^]R*!]1վZtuPKTOyu'1t~RaT'3'(NZN'HE `\M4AE.)Qhb"GWT,bN6h IWCԕUW l|4bNENa t5@])uF+t-+iiSbZwE2]1.B,bN T&j2 =[pqe4}WL]uŔ65+˜+]1 j}SHrNۈtEqi 2}tŔ'] PW^q&"],htŻɈ]1Lnd5.[ &L)bty۳MI>WwR`|ڔ$rYA)#!)4JU [[͕1 ,'J:$+nMa6[m-Vv4<*Ralff;4AtEFhtŸcӢR -#'b\;|2EWԕ4JHW lU4b\'cꮦQH0"]11ȸFŢ+bJCԕ6bLA.]h+w]1RIWԕh0芁FW-n4OB 2eNHz]YIW hKWml?HWLػ=\WL+gF4qe4bZ+t[%+}U+2]z0Z@T6 z=M-L61N&lPL[1u/7h•:cwN: H6L䴢B.?P9lv׷ߎL͙˗?me-KK5fg|8uJ|&L D݉BJ%6 BL!ΡY.HW ll4ju.]1w}QZᒮ+t x[5ÕV)PpՓ节k]D^#htEOAe(IWԕHW,D+Ƶ"]j^%St:]11ȴ]WL|uerNF+JE+}m.+bJڇ+',` `)JŢ+5w]1M}WCԕ!HW b4㮘R뤫g+g*u?8ب2ᚎhk`(=+tupK煏fMvzTX(6]rJ˜2Wq\FS.GUP&K ){UG C prpbDjttDm66>aƔʥ6lKHW l|4b\czw]ѽZJ(N6&]SDp,]1FWD軮RJ"FWkU,bZ}וj7L[ T%A)gij8*3ǺZ'5]J[X/_z3ZwFrOQ_S97;/뒮tQo}W7u_ݷ|#?{u''Po?E;̮Ϊ/b^vI6+.ٝ{hUޞ^!I:XyA6ũTƿ_jKe9jKˑ ghy=QT)Ϫ .Ka+bkY A^NfV:SM A)suIj)vCb]nV[XmUFްi=n*{zK}FDkL,v5=ҦVC֌'czH^@4bx42}=SZt5D]Qb #``v52]mˆRomDtuuPhތJJ)-y}8=;k{ >(Yv/w7޼y*ۤߨ̽:ϩP}NIzS%, GmKg1TÿtH?gg2_Y}jsk/Qr1Sah(Mn>o9DJ%^]E^.>v㛝D/sˏߗ1w#[;oU6JmjF25Y:ysJU7glWѪl~ŲG׈TTQ \S~ܼ{pYv~aN</S~!3נd;9%tۑSV]埸WwNe=_ʋ{u"Ƅ/Qƕ]Sq5TX)<ƢJr0%R[Y%X!!$*}hO'z/|壍||:k\lr(Иʫ5%neEs-NUBC.T= NʮUXWKQ]BK)E^ Fm󢨰$6@5VT&/ ]4JJq>hU4eU)[7!<9Ew;Ut1oDꪤ4.Uށ@1im]uM%p_,25 %T7Eeualf'-]Z'y)Ђ~R*KMuY(mVU蚐A˂")][u *h:IEKRqϫz%h̔ *c64,JlN*U)j盲sale /hj?i~՗^jGTkqȒBSR/  ike VC)ٖ}ٴee%^2OƉ(fT阕2WqaמXBQM:JJ1U`CBȥaN{'x&E`Sy@_g#.4yD4H/mjJ!C*Rq8A13X2d]}P9ϛ$"hj*uoωVRmu ;Q椄,G@بc[%cKctSt-JI25mRhі8:BOku„"kivNVCK Vj/ *$8dms)T.WPf,֓tR.E#OAQ ׄOQdO]X)i0f;TpN.YGV ZBw$,x,;v4#H2Y(_ qр)k|FoxJLv **6(:٠-;u9xC9ݦGDeDM2U Z.JC/ Ɛg-K75ѕ< q). 
jhh\WoT\s ^-+a ' єYnXkYP4kBE (kGU((|8*¤T)S` `; n\*U0VjVa   lBCl\+sYBpP&WH2ɮ3ŸJt"T]!zDzf2 V^X kzeY z3|Wh%Wk@ 6N[@0rNBB(2m ؍ttg-JC(]e֜ANŚY kG !.A f|) ̚` .ASlG9(y;XGgұK!\V!b8-te7 U fY#<;J` ;l?B Cꕶr{&jqϪ J H/[.,zFXy}W@Bb|M6y-d ՜`dV.2Db0۽@VQ=z+p"КeM2y?R5ݠ=bF]*"1&9ih>r ULl^vXNR"_ !`V9 e}M{::9WtM TA׮z|ou/t$Mǀ1*4mJg%7{H\!iZ*#d`&SayCs ~0_fVuᜁ=h5'HiD.0iՠU YP04X<@{KH`e YLZ[]hx $nGgp6*YȩՏD?/AZź(7YK1e@TT4X!?&?`;.O&!z*X}t6A@F6P= R>=Ŭ>&J|:&$pYwk ) `*L]f,C)9kDκ$!H;V@r.!z-jB bjw ,]?hgՌ6XX-;Ql,< `D td%.`mJgVHQ2*$C VAgUkUPaVB]$QIY6 NX)j^ ?h}3,I5$ά$޲[XPti駷*x/G$XHhf %0lp_tВ&![[&z̃vg-@ˡ oznځs9d5P`0u1vtrlѓХE4IӿNd6YQۆaMEo)DzI0Ie05AI;ugFзUfĞT29x7,JxKt]ж+9P.O+U:0^Ϻ(Jj]t*22ÐLQ%fti SRC|뛷ƮXBĿv+Bq&SK$\sF`}?I bZ0p0)/J-* X(2#zށ U +IUE :c@sjo6i]r31iJ3k׃*M3| R5ͤyha3JBv.=#DT赫>@MÛւ LW YcT:i1`.ZSvlti =+߅iqf0p((g +(8qKچkW̤Q[1xzp4DmZ\4*w3. bC1 PtQ܂dpNn ZV Kk|\~B:ChT읩y0B.8!7[h#wۋ~+7 +#Ð݃ \:4޵!#WӗϞ(#)SM\N0&nֳ~C^`#G%~j[kh@uI5~jHOnᲶr`B6z~۫>nSEh~Jp{Z+~i/ǗTX"^?4n/7nkۭ~IKxj-1BDA %/ǧ*<|a\E8.87.hB|ΚP*Y&4ć$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq|@؛7 r@&,& g! i'1 dx@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 InF%%@oA,' cgb@@d$$'1 $'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@ "j$$$&9$Pݹ@gNZI Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$$Z[ѭ^نQ >.իU^ov6tI%a9%(r1%5KppK(vAtEZ ]BW@+8u"1]esaN%8]= pt_׊/ Z ]= ѕy]Oz]].4ZӾJo|zO0P{?X%|nPx_ݝù=P/ϛ\=} eLWgHWtq0 eŜk7RF3KRW] ]\mS+By#9U* "n9"OW@D`Е'Bӵ78]=_U_xК/dҝ+tSt_qaNv{LX· Zbl~/Ϗ;#35a^kPzfxz]y|X/9FV!%77YsFmrwsWiW~ݏ_w~v (}/&7zO74o%F!?=t|t¿o~~OE:˻onLou%N=y |p}n#<|ڡ0w7>؂O>xn%^\7# ޔ6r{R|ψ2Aq X8-jn*+c2N`2e%lklpMF!LgI&&ya۾\ǜmVܱҼp˲OH3lG 9:$m.2$UPVQ(2$ EH5e!䳌%T̅3+\qxIYUfu}>mF18œ$?GNcWGfu2831\qV|\.L|NkK9MI?@4*h:G=||%olPgzF)%^SN 'NAgzcl>L8[8h%mdyۓ!xHFĴ8Wv Kq6fgK Jntb:OG "I(27njaw" d@`6J%Do+V$}ZvanF0m8zBwFI%Ұl=3^ ƀnesAgE8CfxuA9dvi:e}7Gp}Kj<D/O!]d RDFFro̙kQI ,[Zh ) DaLI'L*OV rHQ CeA#HvmA]U11B .O*+l). 3df%LGn܀퐣ܛN5g籠%JcN>:cgxHd1 ZEJPR99kyxli1xΆ|o Zy;Zw܁\ x.ߖ1o%էͿW[`ɮ-~>LʶEZGBZBbHkUʰ? V v1k`XjXb~Cr-zĂCiCiiw 0z0` Ve`,\JgV9G0,&..&ڣt>E6[&l*{AwY`[l.pv\ݖBTkOWū)QKr|79Xlr8\Y獺[ە` }}eVV̊pPY7%irhUk Q.gGIN'lg3/>{P@%^  2MZlY[zy]pv1zWEfayTQEFO|NVX9d< s &^p@mȄ (;ef|?oWxn#]w¡ cQ!JΥ مJ+(d`L-n-^ɝsB)'*:,1+$ΔVYk:p qMjڣ$1u>6b5<ݝs/_%03q\[Wd^#%=>[tt$tdA8v)6Ƞo @ 4qV*YD.T ^&X;^S ;6'vl ;~:TIsV%L$t^,7IXt*#BpI F#U0HBW݃ ^_Msg"z8b b.;T&\i"I,+` .xU|jUy+]2TSZ.F #b/Hk{h~.>k9ӌb:?*\S:!! j15z՛W.EOBN$B{8ttn;ߴ9մ^{7=Iղa!nޫu~z3Wnp?\>`P˻q}>3 sۣt~I}\r=),v:Dd5dtEryY?C] ס_KW*Ja5찉-]bm#?%\Ifǵ'_6rS"9)[AQ>D5%Ђl@q =}nӬƂr$@H(G/sx"#[/#3t;+'HP6IW4O!Dkp^Neh!Ѣh?B`h|?p㸙X=qս8>ov&W'a6h-hVh/3͈8λ<K) yNL}HJ88%>b'5Jaظ܃^ 5V};8%~ThCZrHojou8wKm{,νx79J{1óQlv쵲2aLψ0GM,CGX5u[9Hr͹-h6A [Y.ydjUOWȤ"QS} BhuN9 'XeJjG[^rS1v) 8EbX4Pa&9TYDXG"\JlB OǨU3(K)µkY2zX7 GK_o%GሂE#$$%6D-"nYģs)O0­Rii !',u%Ų*bO!W^)ǝ6*APpQ^ ],mrZ{N(ĐwhՖ0mm\@ (g{o]RC:gه@&ɧyBiS EcT2#>'dd |+ӞV(%:QBZ=J zF1l m:ws+f ts40GC^|Ry 4IB}W[ƢbVsq'9){Y,C"NcAu)a+ujne69,{lyt&&V)$<3cĜE N!4>|bqո0XY]/rW(oC/z°7r}E8n^ >vpis$ۿU6J8:}u)x½>fōY&tI/qźl?Y-z=գ U 8/Zș3@5CVC0i[#v∝2P Ve@8SR0(C9RX; IK;'s&(862! QDeqceEĖPS:3{9A 3e3g̹=ŸI[>6X:;5M,jNuS+/PO .t\ 5$W9EB>)H!)O$8sYur$eZU&.vc v|A q/}?xIQ`@HQ0d <\Lj&ȍ˧n|}uқʓ 3N?PҔ;Whrz|?_{ߕOﴸᏨ_d&/?_~ txzˏ? _czӫ4l =~y_o.ƃ\}ڜ?ZXdoA(OdN:dQג͡q*#gjFS7d4/j ?_~ ,ѳ`ؔrlݗ~ik&\"f_=˼} qmt˛w*1:"X#f4h,4'g0S`T7zOاÿ/ I|0YvJ$T~XJӜO>^ɮc˺ =ڶ*Sixu=ݗZ΋Q=N)uɤm(FWw6?JnWڠў󨜲xTKST0uys'mMeq{.)qN" T)J4,j>ͧj+U[ld(U$JN'WQ̧Ö_Vhj&+8VTEeӴW=(+ i%({.YZAF@bHM@k7 R&"P2zEI_-"'z(}Ԗx<.j0DȘ͜Șdl-cW,TPuXLEtad5ytA 3> OĈm##y8mqHD@ьԈW* @@cHQk % 6NmX$R:CZBDG#v6s6# R:vEmu]=P+ kX`%BCt o9Nr'!s*.fȅ4*qTAh5> 'H QpXpUAɲHD>vDD"⼾;D\ og\8G\کX O\sXEv>hD:16. 
#p4C8XG!1pz ",iPA$MFb`2 59[lzU[ŲoXgV+.̸H:\pq4WRrĝЊ" ``@RcR!aP" pqؚ\cWʊ 2cGeЛ?ohػ6^TY?Hߦ;e( Jw5O83t`rGݦt\%`*J2i7VTFq$?g([tя뼸xOeIӀͦ7n;= E\}L2;V%-?({N^p >&yאe"k#ךV |&늱IauIM ̀.mL[Ju錪5 |wR6>j=;5_ΡԼTr}{FkknV-Vi^7s^nIdUKߺzCl~St}7%y_lOA2c0,p8L7\#mii0?!yn}-ħ7goU9]5kU]p|?(p蟀iG ~;>&q.}Y+~m}zit8LHjL lpUa s/ҫzq(8P9 ? >8w΋Z]yX~hb4$Vęu{[~]MOgUS ՜3pt6M,9M&Iߨ}5֡qcx2\4_^}SE9W`*WZWJI:zpS t0pUP ŒJdW/W\iKLp2W`%\D⃂+0_e8xġ- 竊n0Zv:]: 3WP;x\LT E]W%4Av229y2LC*o*1h95vT.VFkY7hj"j”xʈN[!Tw<`|갊kl4x):o Η T@H1lXZ.BB@'T qKar66A٨][.9ڐzFA札9#B`+µ\mЩRYz(WZ?f QqZ.WDitʕ:XH4B6rEMtER+9\Fq+f]7\ iA$]` f$Wle#WBZ )tYz(WNiيg+֊O5'E&_I.GW}+Ĝ 0BĕT;Ѫ+Y(WANuWld#W$D{E?(rboiz"0r]w8\#-KGkr\=A s>AU~`ȝ&{nVM'zl QhDž)-BYO %B('uq;(tNUؑ {"!ڱ]nl:~R*  hpM kw&RM0PچJi'f#W+"Z|CEf:\i lp"WDDrEe\[\#W,BZ#Rf\\lp"WH ]RYz(W8\!\W = q&Uʁ8"\fʁB.WDrQʃv2+4N͛A[ʐ媇rv| 7.rR(;}YU ɦIb{F&Q#Н _ -,rMr{węB6']"8`BgWzeڿn 'i'YfcLĸQqN9gGT3vߝ=r(nA9[Bٞjzi3H+>rE;:\eYz(WʺV\n\ iH^2%ij\D4HFrE!+{*\VYz(Wij A6rEMtEΤ.WDstG!xFr G,"ZR+:U:O"\FOf\9/Fr^"\&$ZQ/Os+;Mm[|VnFJ=jSS̸P q/ap8nA$:NEn$IRl.́SL/*W\!n\Ll,W+6䊀d#Wk"WD\rE-0re ]kF"im R*\P5#+U|+"u"J\PppHgq`3wE|2H.OQhrcɕ?\7ۈ:u"J/\P8;> y8ڽ].WD)U*Hg$WAzaL#T\-$?wEdz1r%4<W :vt]њ#]Qd\,WO6݅[/kzh\_wn}bN/ۮ.nlvl? de ,Q#!>Y;a$*]6m?nqQΆ` \M>$J'\PvpjG`/L.rEQڐ媇rE&`$W>\ \hN]Ҙ,W=+oQ2Hω誏ruS > z>тL]ȕ􊊠I n#u.g7PSQSqqHkQUSUM.tN@*%Od{Ni`쁍Jn\TiU(*CV`T䊀`#W-n"WDk]rE,g:\a0+֎\a-(rC24eGϔz"W.$.WD)EXg+ڲ(6rEfh]o誏rep"WDO繫>RJJZ6rEq+52u"J``B|p"WDLr TUp[u*Wl].y3H.(rbJoizhfu1NfeM;T'B7 :gMq8svݳ9wEugqZeY9qR^q;˚Duz8\}xq82x:"g9g{\!Fk.r^(rCmD`$WZѪ\ejY"Wڃ\06rE:\ejY"W; #"࣯Bڽȹ\VYz(Wz4#.T)<(Ag\`S %WI6oV' Yz(WvA]劀=B Ѧ?N\GP #B`DW-BZ~tE +͉%Fq9v'r.9GZ*2L\,WO67W)n'e?i|ncTl:iH8j:1~PƦ-J3&K6%@ڠDQM1Za\i$h5WG *U ԒQ+,z7oeay}ƽW=EѝqOgTxx9exp{ޞ>~Jg䢩+YuIo.0!Vk*td6LUltX7bQ jrV}ZTf'z{z*f)~VT.:{g=} r.}Y _5;AIc|1ޟ7a1+W) A8WG>^/ǏbtK*Pb?}.hoxwtw?xwWZ€Uo01oO$w2ļ~y@zȚB ̺^ Sr ĨoWtu ۝y@(Z,? `p[>qu\wN};̜(Ckӡ]#-rzoH[v[tx-q9->t_[WE'[t6(/J&gBoLUB5YU"zTc+526mBG.hknu!ל} [}Y槔cO/ ]L.>MqBKF88KeXYj+s^xtR@gYYBQa8PM//Wd _ |RYOXPWszq*l9VTFǼ.?_Lz~84+/[.O۪U@8gbbD릑Ŧ/í377ޑQ;{ݨL9oәyd+[󢬪f"f/5v6|tu+sYp^w=2bU}< ; jV+ms2 tvp-8{<,#(Y+ހ]b+s!cRYvtg|t~YQruv>_>/S,n?^~IaIoJűIpD TpLB/P.}s:vpU{t^.Rj}iuZ\}^װٙhZu+twq&O2<Ң@ufJZt%SL^ y8gWKmuYr&8g胵+5FJmhJ@_7W4z%{yG{I[Ј' 6͖gp\--1ֱɻf`~v))y-pGPcS rzN )PgPDX#D9ۏ@wX߯vWlʁګc^9 q@Gϳ>\5 owxBvW&ڗUŪ?tUU1{ȴʃV54Vއ :Xoء$0iY㪶FwoѪyu5gª߿ E}}qM(!t,dhI6\-@ z4AcmR+#k8G奅vrh5.܁ ˷Gߕuxw wqpMwp'˽¤0E#ͤ0Ը2j02[y'+osޛnРDXU)Q{JeDa';vb;^i? 
#i f^Q3=Ǹj;?POZ sߝQNSLĖ=-SMOeNϯ,ͬiA /'xP:UZQ #uXB- ncU9bI3)Z;вw۽spM%2;u2AZʊ~ߛ)E#s!ם |?z-]q#[)8#a Hݖnڕf 鋸q@=G#w/I=c"17}Y(UeM:2v!lR/9|^jfgٷهY2*Ts{2M ˭u5󴽯\"[vnwgMܻi˟&i[4)eC񙗘njCH/j*HېOH SֹS=xz%O{O;qЅ'dY|rYzguq}sLR"n pŤ4WLΔIRkĹZp SC8%m8w;ʏNxRu֏vO._+"-r&_g7!ȅ?!+c]!ROգ%<RO,Q01 djlAwҩR-(ýq1)ƤH8G﷖N#A=q<;P^&Si=U'zY= BiuAqj%씰@!HG,PCXi-4Ϭ!KYOe~J4f,/X$7dVU:\f6B\C6+eff:.\ RlIXOIrJ跚᪉tt|80GK9LgU8]K/_tpzA/9t(kRKȂ BYJ~׽Zz*ٔ} ׫w lʰdFaYD˖ h_0k-s]KÚf%d9tֲ:Ho0k-=%fe UyhJp@_0+`d/̳crj*/sLANdž nzr8:%f2aNc3,Jden4;GĝNKgKf8*j*Z u턷wi2sۂHAqJ [ F$N-F8OUji?KX1Fn3tY3<\K' WGUGO9^q?icu/u ]2tY IJLf-M-e[1âno)~ZHޜ\^cGWٳeiYX0) fU{eHǘtk~9Ы֧_~ vv1Gܭ61 Xr&fDT0L` D3ͥys4nqrD+iNcL!UPp5NKߟ+q3-߿U }+~Gd-$M',c@hGh\O{^r2/3PK\50;`9e,e1Ko)^\\{:fvh3W^P"׼'ĉo|>qNƢyNOpH߽?J ZM;NƳb,~?9}>% _3J拵S7լd_GXI_0gߒPɽwJqq R"ao*ýf<.65ǿ^MB !*.ݶ}>زzX2#Ā{LgTSٟZUx@ij~WeX=ri/`M-=uwƷj Ytr>|y!=]Ƭܳǒ`Vh.}ɖǗ`5B=$V}&g5#'dh8POO{֜sZONWLaS]x,1_g7ܑww_ْ{ >,Mg!zřӵ3&ygftBy8NGHlb `ŗ"k\NY-ȌN-Jer\Lh89WDTT*nRm; 3O&DS0ACbKgu3$r2!Lge$6*9(nuryVHT >@Y}x*aڍQ?\sd/ov j9 ?v.r^T~FsNgHH͍Rf]2:iTS˜&bԠO4Bn"D  {;I#թ lz-c )@ rm87EQc7a^a:Jd82 Y]]F,8gaF#oFy%_I)Ri蕦y0~/T0LE1E,)gݞk ɉ5HD D Ә@s6u_2Gp2/X ,1P&g{%!d*Pۮn:+}!jLdz+X!Òyl4WqѡZ%XLq^_lbfS !KB#ŽVl?dM6 JhEMI"h΋ qNi}&sʘ4'Jȇ9KpT/fh`:H3ʂ3FPA / Qy%9hž0BdyivAẏ.f{A JZ WMh9-x.rw!Lwu6 JP/PS uG4٠Y1ƾ$O νr%vMdzb35kWB;PJuOE~d5!2~x.GA ŏ֟ކXvVUBKV)}% "PâQEt_PH2zFCF:q96sbLG6?St{Sa63{A3,r 9␤(J e%dpc-ۓ'wPsg-.{Jho;?_CKNL/A6?wwQymng֔-JoiZ2&^<2O)[ޢl7 ÖPb)=-B2j6!.%q@dF= 7Ps%B?M֚URƜH%qBNpmt6BܨIxAu8սC%qκR A2B֑C2q}% I7,)xEޠz)uBXp$'A)IcdZn㢱YhI{hrD;eAD|E:F/Ge!|$rc77DqĽCkX*zcE9_xhQ^s5 q<`0-ۘ!pH>J-ݨI0ZcMM;H[Pgެۧ #]ɶܒV8bQ6w]Ilתּ7f0N";M@ꭈMV\,7 l r4OmHeR7)MdiDSml>_=9]^!I9ךyBw3u9.ò6 fo7tҊpْB3x֊M6 4oVWR ܲU0A\;$e&%Ao[[}BV5XnV! e8UeJQ#c+_lSv`cM Am4e()9|مHdֈRb76іDqѡ;"3ʬNDJ*ߤxQOWSUqa8҄_3S,M_Kr;wRH=AT =9(.G mq,wmkrj2%N3R;}TNrЁu.jX&B zUB"%qkfm2)9N}yyɭgȃ|^NQ6ZU >n]Yb_c"ZlUs=s]1ec+Ȕs_w.`SE)ʑ",!/Ceg0B*?<ј=7W3+(PR^@si41yё1;Mj*4ge;؅7amsc;t7o3EEy?@obz˶Bf0K\t2/",5X>I'o#THՑ1~,_^ױ xJZoE0A&$Q%*LW'.tmH4!DzHǡKn"ՄrYLuR벏wmmXԽm/aC;;bwQXH}%K.K*%V䒌-;<$?co2>q{y;_b G+_O,Zm W҂\ @;;\Ob)@Ohf͸Y͘aroaAg ߺ,H|IZ%JdкdZ:MIO-?w-05=[([P]\xlsXv2GkGS0A/+W VM=ӟ+LZ=,ax0*8&7QͿ`][6YMJ&ϗv\^R ym&d6FpVm<.yv0w;7Q6)vDcHzhF}XS4ʣӀI΃!\(>vmy vCUֽ26Hci0}(]G|gZOF'z.RŤ_m7mZA7# ~9kqaq{z3%4NF,h*9VBBȸj3h5zKN\;Ϥz2퓮UBZ6srRGtT'5xi9]*6eA-P|'wBH9F!>e,*I `mV4tcaO_8ڝ*2UiұY#sd/wf*Nj^!%k;vb<^ؼfwf&SNjU:^7z_fE IK:^ RSQYM ?o)zyl{].t sf&N:ԼTjv٘M7~>R3L`ӯy&t0>^ O}*$vmy]gÊ Tw_J#ő/hVSf]D$ҜHz,C\*b`JDe%MY4pAt^˾zg\"~n#T3x<>AƛbeCK:|E׆OYWw$A"6A&zŊ7+ X.泚_Gڷ+e>[Ŗ&BL!By3԰R!uBP₴wޞȄKbJ,2Rj]hpml~<[,`f`9x]R/)Z93IR{]H] -<&AphY9|9 S"gZ@OFyϙFrX@2MCasT= VYD^6Dݲ3kWx0eBWoJht6kr ڔg+i龋1 QxjR!5A!!VzWqR ]qGAPk.vBiQL~'؋8Kޠ^g r?|VyJ(5K(E$àA_騾NʎNx ?9eXaH7`:YJ> 0PP,NAtg֟uQk-Yɍ&N)V.щB,&Y -i%nR{ٕ4!M9m%W5|򥦼)v7^jw^,G׺RN[%{;R #:P.ݓ5fl2bb1F%4P|yP!77q.!8[h6C<)@Jy8XǠ_|by"*&3yC S#Aֳ k Dd *Ĵd2O9"I I Xˊmto`EݮUv~{f_+8]+ qxXňsg8a(1(TT{1*XaɜĚcv*JCU!fcA-0%I1&-rӕ~)a uVɇ *x rx]V$? =lZsݸER_q?Ajf[ &N-w{ŹXt)!mUz2BWPY<"2s"#F')Q %;KFe8Z[P:$n.վA$aTT sȸ~|X%I|)7*3Fm 0@Iy&E?kb/WM]BO7xX S @h\0d[M3LAۙYM?{W殜ת ”asJ s!:Nai&SMT U X)2utBfgl:D.*KPJ2Gf|X~E*ѱ e(?-L'R(۩*= &Zf%4.JhMNmDy9;b\/']=YhNιذ} ӯl'8Y2l%_ dIIq˧c 3(C+c,M4fFϺJmAsrnej^uw;)yKqz%68 GFi:apefb:|m1,X3Nb+(h6Y@%4{-rWȐ#EɢwpgO= AiNf)=5߬zҽϤ L\B ާ  y:r/^o2nĉɽ~?e6LLӶ{6WY]~ro %TSdcKX|*/W-XseҀwBnDӕ1(=~#o(۔[\e ŀU]^(;n,媂0JWԂi` YEyhn7{Z,ՏϾ"+ *m6C`ƴP\~kpLU5RqզivPĭ%csĮeZ>UQBby㱳 /,[ksk5Zv~'.U5*3JŹxYvEL}+!v! ~8 u )\@* ^wwTA4|pjHJIO%|_+j+{CM"<#Ϙg^H'mYTqdÊ{(&!jO:WCMDZ܆6{ x+ѕd5WwkE@0?^* :./v&:`eq'}'l<{<{qRF7t- -?YʫeN2[ fOϠWy)זoXPh莒foeȸfojprE#HwK3jU SJ?{H0OIh~Y`63 dw۲$Ke'OQ-͖⼜3c/"Uůc%Ϫ0ՁBIgN94Ro8R'b. OAB:zTHf{Of3-t'zh8. 
+nDwsfp\`h>-pEr\ 9ߩ,#\p쥂5 YH\/䣷TT#*ujg"NVWp΄h\}Y_Q5*%gw//J!;yn 5f==WkM\c6wɁ홍QnR,޿7 gmM,!Z214ZHCD,FGepx6&\zs!R돧"8ABN) +~kzV5CS6=eq}`2|C`SѢb8_};L@\QC&ύn|j}Mfx0Fi<~_lvْ{f[[< "75>sIrsb+'4G ^*6QOi`ew7 ,59 UBB¬/,J ߳d9 DR!߽8-KƊ6@Gd`q %:),5QrOE̡ݼQR+.`D-ǷQ{*}jn")h=i,.8<4 ^zL2NYdvҪTӳ!lqam`4ܡi<5lkMWPv>?!g몵.cP,$#pbCRMDQx9!, Q )+*"T-V5R'B:zGZPg T!#`60ݹ̛u fݏMQt?jY':СqJ:ts^#[oސ=U&e9 3\[9&,N4%LZH(* rTC9)y? "BEXݗ콲)W(460O|~kI)hGĮgQyr%"W g1F2MWK *s'WcTu"S}F5Wߖ*uw@c'tבwI A\!Pb>7Oh妹.kxN{\|V.f3Ym)tevJ oBDԱ3͙A'Ɏ2dG)Wjݝjvw!;Ji4O藍 TbtwyP:Se2HXOC5RHCA2>Ѧ|2`LlFCvY|s"r2w2Y4LƘ%POP#8לFIY:€G&9Y|vI0FDwy%qq`~&҉T^S2O6ۃ%GLJ!cIq,0D8B|_f(J9 P9OJS<3.ͅ:bSc+  ^[QƢCc.\r嘂>3iԉCܭ0HΚSl&%8x-sT+d2Ӕ"Q\#sWȸF𿦝C.eԭ/U3@Ŷ}Eh>~F' C=U†5c#ks3lY2Tm>H2O0%D ck*B"8ADX2>GybDTbAN}mFℜ.e^j%s]5}>Y#GӚecjQS0 bܳG*Ub{1>NL8aJ{NjGLej>9^. ҐN@V ϧbP{8-c;_Kr.uN$uLK%;S[MK!cfc>B3Oj`¯ WmI;/#|G| kh]ihhsK +>+ʍ*!}.=1;=x >NP;ug雪w℩77"tq=ahUILƱqMHӸk:ǪW+׾ӽD'uh@2 npW9Ow~L@-Gd[ky5NeR[_KyzL^$†C$YN#E,M"iH g]z?oʲ.l͜FN4.d$cSc !N#i fj@ ˂$ CؑKDwĬ"W'ُ%(F265=-C& cj7cRNptUa L坪 ]U^S}(p.!F5n*#f#tVo AkT0Tȸ6vC [tW,|w1θN%2w!?" DɣWOM.Щϓ/Ej]q4Kx+a|bڞ?Nqs]Fm zjƬP_W?BKwڦ>\gX{1b,Z37ϏZ(.մXzvqӥ2#vӛocp~l6=ϭ7 ~8J *~/0#P51rO35 k 0~W[Ӡ[f!i7^ˇ^s?v[.`g? e8-[j3zo_gߦ`a4H`9>VbRc>`zuWruRMt4ډ"/C~ k2u%s9=Rݓ` S/+k+x+{L Eq:k5s cqX<=9 ԉemoS6M)@ )t. zo+Kk,^%2b_s4 ԡ@\֣ XXjlq1%pi;,~. kl/W`VM+˼MKbƒstGzy]N n2K.qշ޹҅BU?Yiq>wz{RRh}:)ΚLm)*_}b̀@wSN1!gu05`P:Y <:r~¼9MՄx|*_8U]@}X [3:7Њ]m(_ԉJtu!g@ժuΔyDW}ڒxTszЬqa{K_Gsb{>}=cC Ł{ZQ%%f:YdTjY7l9_wP^ɩ/J^&7 XNEqdY~ozEz*)wwuE]fǥ]_$xqyJw1Lư?p*E  $2oms6c*d\lIyWtW_|nAvc5}O رn; 9Z*`S5kMjCiߡMG /[0~/ ݟDC% Xd +`f\"-Af UK%45PB#<'C"sWm4PbxT}3ϓ<A\0>lp1BTVoYpNXl]`)΄q ԡ9SoV0v$JaMp` S+;*vv"_.B[Mf +y1p j(:D?YY?U(ä?3FJi$L+Ģdq.scЊ=g:Ict2/m_ IZ&AOy2,w[j઄:Ϣo9񃚁]4xm",Ք vܸ@h>qMc2%X@ -]!50{j JEiDOd1 승5J2`~֡vkMbN9Co}d w=L@ddΜKHZy͘qAލ}^ y2y yKה"Ŝ( >F2BhTED >O^8Blr soYvdfYˬAJn|ä][Č q N Kix8NRW ͡W:cK#_'j0PShfEgY?ߣ7 t󛫄T'@;NNV{|9묝crQQNc,,-JLrf'&`3bn+bb-Eș"{!ɷMNeәd@ѐ}ڌ(pAXN(VqaB)dJ7z«Ѡ4tŧ@kJQf)meF9tȷ|gOga6_إi+Tå#Zff"1b@͘Z#ѯLܘsܝZ7&+Acd0T %=%/D9(HyjDpjmǕ1'9>Ha vZO?4oIT-WiE jh3Į|ӫ亴iKd%fLH?s`"$m=x i#B{cgw.]. !Bi[լ'祺Y8G2\qZ'6/28e󡧗Wb1"oCNglvPJ!.d>aE~?Z, g43܂rnVw>M|>R//MH|HWEfVat?(]PFY\J_]\%#X#LF`ZV-! M\QRd(-GGzi#L[k6´%1j֩tY~E6´O>Z@ɲƢ%Uט;PkŢ~f_nsScTp2l=s[I}ݚAouI8dL{MM@:`+"¢ bBf|]`?Tst56qǨ©u:y-]Ԑngyeu޷ Z|wM:C˯S nuW5qÝd‹Dqʹ)@o#1 K`ȭ|Ӟ[t ?OǺ⣚G(ZȵKnv )Ljܛy]˹ƫ!WoIͅBŽ0W٧ȹV#D15oR?{LI﨓ZG6 2<[yL͟0} t7I`ru.oR0M ݴcvQU } 8(r$B&44)2dž?[nzRZwXCgЄ˅#$K&Zj[]R_;[J;1r;]F.H>6VOOzor8 C|ɱ`)tàmjЫTO3'fHJ泻 hɷh |iqkԄ2Y>o~Tk.RviTo.ITeoe(_[^[U 'f]Ij 7:ZhRU2JCݭƫ11#bWŵ\/&O{*H-dv/$7~UN+::5lmEr;ZtDGS ,73%N" ,S"Ǟ嵝ls&(qtSSod?[3xeWk?c]JL}n6rmg^h]~07Z e 7?vףʹԽ<:żt,c ]$vA+-nՎXJVj@@: asmsq_ZA<-(RMX&[~g.pw:ZuU]æ V^#,v< ,F ћUJ^5SBcoiJ#]+>%YwR.J9\$P󎫑N>]mPLI #0,;W?NXiysnbR#~R[lg1}+%Rc}Z ":Bkc n,o t/Hww.ECtTJ=;YMQl6xJt3M-|:YD;]W X)+啶|Jyk M4 JX)2Zw~S[w g΂5ԱO+! y;D.n5imz(OHrGDOJ8J)K)2'?o]'f$FǾMhj%lͳq)bw=e@}}c4Y*eu^9[:SP#GYRI#VRop;Ōd\KW5gCkt ,lb3Z&c̸[Ƶ#p2lHF.wK%[5 F Ne`%@;u%1R)cyw0Z)ݞᒒؼV_ *.ބޞi5J=iX!4:cM^$Cߖ1Y$v=N(_P n!5uE(>0yBYZbZؚckm! 
$I G謐9'HH 2UY+e&-yyz&g -J.-ʜMy(s/G`Mxa}I@ A 2༂TE^S\4B9ӐhSwS*¢''nN/4тmڈDi;gaF:ӳnIA恹wJ%xTLJ$q%dJ=4eFć (.,X&AO+ }@Y5OEȾSko}sz/yB &7,5y|X"ە]!c:dfT]MzهB]͚  7LLx^̉HCc]tWXG55;Mɢ"&t4GBb0ѿ8Q0P:dz;aWhnNCBdAk9e^)[BG1v@=,yPefń hچA)E[WQ$OȸDD 7Em"E2#!B< ~@DgDj͵?LXGas ~?~d>eE_>Nj/v}ډYQL0a5q ֽM`K2<+MH׺=Z($i)ZYAV* r2S,/d˘RmR-ۀ~>AspZnmP;M;4;!Y3R4Ҍ G 3h:ήf7zH ߳xTm97.ho Cc)ADAFq&QVL"F,g sĈ0-[ x:g0\ѳ=ePJehGAak2Ș'VdAT P."PbS,HSb~mIcXOk^hT{4_JGYc٢>  LBrĦlIo(W7r|V q< Ђ^J8`_#sn`f'A'7QZ\@Cfgq ʐ-4V͜#ˤZÂ6nEV h蝋7ɟ^^~&!M5Eh-=m3L(=;dUO=|&o_J9 B[hv|ֆ~N=Z u~Om& HVz# &'c>17wˊZ@r2IL OK[1 !x+TO<aDw'B a=kIcC R U&I5Xj/أF\UJ>5X-=0&k9G|{^#OK~ĝXu7-hZZD86 y0$4diji=hMzy ȾӸx|2ۑ?3=4X̦KMT x!Yg)=k wp 72dyܦF&oN放b, Ix:(~ am5s,) (e2taMm 0^OJK+MMe) !pMS1TcKm<*K5\J]hidZZl:;D@Pۑ%U D'_& ZiQr/=#zAe^oROX~2FաԦQlIy$FQą>>xi+EG zH\66KABd{D޵S-MB˄{:VCO daCO'#+gh){CZO1.2bˮ,rx:T=qgHN}q2윮ﭪ␊ 9 y7zwl^悒}qaȨuoϐ>Ӓ5Ś!W_h`u_!1f7JREC̴gS7lU=4H<ۼW/vvX0wv+g8ClI6bH;ΞuJVJuu7 Xﺙ`p F,z1*=JapӉ4}D){pPYZѵV8nrnUAs<(B9F:,JdEms Cm7p( &ᴬP[\)j|YĨ^WΔ{8Zu Z'd l\ǃirg3.+#;?T89|uUkm2 Z~?0fP|h 8IihFt)c CeTe 6PDS04Rxd$ekR6<<ׁ攕PQ$%?ܐhKb(LGyS4r-szuq)U*|BzH_Mhcr:Fᢎ{\!  L. ͈l2+2JEhxH) h4Bd Q)_e6-sY6jKhBԌBBGiA-/h.nޜOGg5ʕk`HPz cE9,Pʪ)*}2-5! ZƤwQF vHrTwG[BvG[-$L އμp1-t0/Kj{kБ|̴\Qсm@&QxB֌Q\i昈 WZ1OvAFG !'~ xrn+BvXsp%EkP[e ߖvpOga! }[~OEZ2\lmBƜ34 Z[쳠<@P{&`Z nʣL+'Jf\  p+f(&*򤨔%e,i6/JQ?<[&uߚs4乳ws5M˫jiRt>23~`Ƞ;QC%n6{*8cV{,(SCͶW3- K^/J#+]Uђh_s\R;8fߑc|гc}@ !#+i{47LǾׯ8`EVĂՖ8c ~))>=]! %*5բEr;OO0 vwb!;T/\n-۠4+:}ݷIYPvwo!>j'Yc(I.n[+!+)ݗx!^&c|}D /.65{q?G`zxgOO#wC.09J4w>s~t8_c[FF-Uff]Nw5m:L=$@f^rf3?eNg$,KnW8M*t- 9$7];uH$Iv {P|-?ڎ; ;3{,'g6^W {1-hc Еp".H87)- †⇹u路n&VR*}*'slOZpZn&5dP;>*rwkdj1=*2kG/u4_P7k{iixid(s sϨ*v_r_x?81ү~0F_~`O݌8"R~З !{JOpHZVJn:C W#!I;j;6l#DJ6l6.9bÆxfVϻ6 p=vƐ;>ӾMwlD)' @{bڱ9HU$27e:]c4n2".`].hC5I퇩Ns!` )A< L @νDcJS*}?ڬ{^w+RxU?- a<]Teϒ_E ᠆kDQg0 &d>R04@CY8%aS΍GsB25.s+5F1j[&)C!HCħEM 6mEKo->l yvbJ3EVxPdhڧ zŎ4RTTcTXpK6" #'C폳W8iDDZ"]M;75ԼswzJ"[sB9ވF4n6qlDs}4E"ߣ&: pe4Fh :-2BL}YdߤIeo6ڛjd_텴]MC7:tBt4"j:jClZ[wAbxLfkGUȔk1X PAK|kXGI| 5°D7S6RY`XI:xsCs JXQn$wtQH, vbg!e5ћzVs?pmƛ mʖS S|o˛&ШËoPh$՛{W.cYP@$4-즍Q0>u|kZ`bs8f8㽂ħApN*K5>BT8y##ЛF#Aqh %Òa#=$<.D" *թMd]Y; F[V$XA=8&c=D#d >eBx2ȊaGZ,?^/ MZX} Ð%wLݻ\H]蜇nAbRygn6 CZ1㒼Hc?\<0gɥɢ|$0_[^lhSR+[^j5As`=d flѸ/l8 xOR(󢔌DyA; ؘK@~c-0Bo7 ,vއԥKlc 5'05 ƒ&@']kS8bO͌ʖCնzrobg͐Z]nX"tҼD}-aF,PH5HKP 3]SB]2zTQ=Sqajya.˙-FnFYS|| %[ F*UX)d I24F KlΌ Q(d%<R4`ONQƌw#^yF椮4A;sΗ ]ܖzu^O<[6m/|0NmIĚx"ĴV'>D=pgIϩ:z@`~FʼnGvc:Aݚ[ƃ{5]_CXe!F{G}!~f7mt%\7;u4Pڈq{ Y rKwKumv'\7!71P;B|IV҇}lAKyidp?퇸9\YhG Wؿ3%߆7O uW#Zq>䔼_Av#v(cwt.qLQVhBMy ~NhD9A;I1&!1/@DvvȤJ?@k}g^:GJUܫ˾ xu @nEԺl"6>&yk3ޭ#"cGn.Wyۮ7]R.*Ω#% '{ݣ_ydJ6{V޳+ xӅ ob׮%mIKBS8_?i6^j"%g=\:͓E+Y'N/=;j'g+<):Wֹ?xZ,XW`}; fqG3V']KW`+% WhC;ՌD D|EvD>ӥvHF].9]*" $$I.$$ T :VEC w{ֳٛ~$_7 Yc׽~ZܿC[h  H0 ziʀV3.L\$|f4G%W>^~`ܛB6Tm9*\q~fRE sZ jX+CjS*I>aK XMq"{NGm׃-!1ѷ69zsFH -kЩQ"e[Ux~[:2* \Mo^w6;n1>ߏ CgQ泌W_^/S g84Z@,O $Y\s=ߓ%Ks$8`V1fh0ZԢt"7< )7}6/߯s B#+~[/gVZ-Vl^RsGqr)ӜQƻSG;8=((1CUq #Šw1)uGŷkspzT\U1y̨^N© !z_~Oo51Vtپloޠ j"s3Q M؆Bie%yPȡgEŊ ZbX|R>wv7$:H`2$Q ^kc\_k_yhxGj<"J:7C8ژ摀<Ŕ'@%ƜS]лW'\%^Yt:r]j-6M L)"][}B s(QF54mDr.ԡ@LkWD=EZ cԖJ5ϒu}ቬ//ϳ;Z& Y黶LZv7xs ]鷯|{Bi-ij1H W,YŽ~{6ZGoXY:z%0G5 .DMO%̧K~`~F@; {`CL{gJ 31ZƿדhʟjxK~O"Vڡ b0z/G{ҵ!w|&xi'FC7BQ*{{!mDZ@w1wx<A>]YsSɒ+o7Ԗ#xysND@c *GnIG>DT::'/T^()6p4יJPXmuU8N]=ҝ rGpQ-§=P5< zv,W+t23AyMƉKF&p[/g4~Tc0o6x:K9w#У4̯>|J̐@ xon].v}v:g%QxC9{=s{U =??=~ʩMptd?fTK'/w/?r~![r6/MTx=.ܐYRϋ]wzfFn<@(iCBR^Llj46W VX>x`G>& W/@B?~d<⮽7ϕ%Oc0t8N1jm{g+a{Y.WN:KW8t!bۻJ'J:zIW@dbѷ-f$S.\!LWS:؛`Pq>P;c NjDQEK!է>]y /詯_A1uqާ+ ѡQ=񜠝x;0 -nXn9@;vdhwWhv_( qˉl {nN GGv2@sBvQw- {>n}n'H(r2K@mV}dh@CX]5JΓh@iܳ@ڹ-?)S:oZ {@~T^h[Ss 2 j.Pыh7ߐ~ph㳂v;)hQ"رڅ 2SwvTac l$BBvv o(5h!5}v 
a-piX|l=RExN^HJںy6In}t]Ufu8nu!"oCّpWrrfc|+x4smɇzs(cH7l=n*0 iVHKi&4%u mv7ӣqP(Kek߽OlB1("A5*,$s5 ̡Dh{ۘw;7 Oq}zOc:)CU3)&L}psE݆ʪKTvn1ؤz4}Kv9+[I!T6e֒AaUg$1;a67QTfCp5ZDߍ' p-8Ў !'+T g(׾#\dfPS>RS-E2gJ TYUd]W듋 AT`"Sq̜ & bYEA|@Ł#_ZqEޝZ]TCB9TJtR?.i|̙Z!/uOe i| &^|zǯPn,f~R߶iGޯԖ=.FYy_/y:hEs&-1q3ڿ!k:# cFt~ToFRMiCS:?^VG.{Q1\bqczF1D9f{U_h.m\w"cApW<>Q!'\j-yKv[BN\s+_pAy}}mPO>^>$zL @f̧671%EF5%J&d Y@ilS?IB 췉>91t&ư[/4N/9%'T5h,>rG8X_0m j"C4dPҫA1/ZTpJ j{$sѕ K9B# hL |:4"qœPաc_RJ; JӘ\t6Ɵ߷ӓ]j)V>|k-5Wۡ]=SAkx6{"ⷣuَBPzIO}oڻ8R #Y}}*|=7m>}gCP3屓)99nS(EYd䂨? 0ѐ]s& s'3JUq>yVf#[7'b8x+={}/ޙny#sX FDu&ȼB|S>]OyV\T$GDqR0R[%Tq VBkȅ3ElYBĀ6B+h048E)VB'^34H=o[/(S,&+=Fպ*Xm*D)$1Lի$;0_\W0W(Ro8g)s 1+uS' V0#h =fz`Q,̸݉rv뢵nl{]]_f>Ù4R.ݕt8)QSIq`(ggoK "VFjPHQXbRm Ūp%'UIl%] /tBjJĪUto$vh<9ޣ}1 tx(H}=2q(: {!5'@='WOIb'u؈)w(Ή*ؖZ3іfcH20Vy(∆ZzSzC235un|ҿ8[#v2'Η Yz+1~JqXGv=y^[WՓKە($ aJz+%p[y| vgrl͞|zR# mޜj+:SI =_*AۊSk`ܫ>LA9TyЎ@F;HMX+vTETFQ:cgz=^>ڞ?=gd]2//?v%cvNu63QHBtǪÑE&̣23O]ބ|є!oy4{HvYWOd-CyM{*`$K7( rP/yS*].A".>@1MKHm)+MTK WYA[ZEyV{w8VԨ -5yh>Xv\ ZËX-{Vl0tiO6}Khc8\ۇ]H]èHè/Ea̠7q 5?rW,Y 7EECezn'`>w|-'憽wđiWpXOsrd,7GZx2b<(0l,y2cV :@Gv%m*{^a)!-q7=UtqaC xO<"M6=针ST?;E+{ӓ/o}4_O~GuB45EU#h[kV'R8*; 8U-I9GԭZ RQתDsΝ[vl6=@=@{#XE_B>[{Ms=}D߇ysh*=ԦNN7&}2*+l`hjXuWm"Ձ=#w!8IqYNN.$RSjTSsX0:O̼ק+{}rz,\4ju#oܬIĸKbJ:{9R {'dEpLsj"JVSjEO1<ٍ3v泌QFZZa,M"H=RƥqRu;Aе"7+E>"?{Gj_I$JSQԣm[%{a@bޅG=lfw-Г0Oϻc >S;8U39y>^廁R e2|-GOD,IuJʪYL%4)MYY01Xb)S0"M>٤¢QJyV8@'?MZXأٸ*Ĝ @J6J%!{{&DvӍC_'Ofȋqֽ! H l}pKȑ9]/R(LFgdkInAk֑des !/G +xb[B-K>爖QzΖVzĪx= HUm0< [h",1DeoȵPurL>'CPRI@SقhLd^3 !W>$d[/^1b3Y{kˤ F䜏΢1C@XT?TZ+h# HGDGTbqh岭I24E}FmI$*bKpN%)t p$3VȎp0(:+[BF"Vt:L0[jӺg EsՃ覴،5SO_SeH},B︻[ǧ۫CS$2Wa ⛺^<$e@09iVI A]5VAMJoȬ[ ϭw;G(7E@*X~roNj|Nk_dmҽ$5 ] H);X5)|֦QtԜTް~ 6co>  /A?ePnDT&ޚ^h[BޛSgP_bIJR?a_wYE%?$sWC^̩|lFEe#sV ;CK~cD/Z#rw ~ԯ#4^,1"(Fd &J{5}c0aLIU-^nT}˓껌z}hPff@ގxls%9ы;mx{ EǸꑜS2, cS욐#s1 p (ϑ {jLꞄ_jDQ1K[ G Mp0C9d$'l1]=8KD y92FC@ceo}vKsZu=bO!Or}'mÉ#X/lcNqy n kp6GЌPC67dRaOa yiҤ+E|fˑ'GؙKR[w=ݿ ɥ|8W5W!*]1aT[oJa`f@/YFU8c kʽJ%?Vd r\O>ȨE4IUBtp Nt~]y-~-~VB:=۹`LC^~cD㍥0sQBI΅O,1Y1R_(Pآ(>(x?VD޿D:oo2^D(|+%gk xtIH 1`lsUZ0ʀ;zGqdO9Іcp#[I Р$ )!SԻD_@l]X%r3%G;yq8'G(Yo,U4`iPE(jqRGMd\z>TlFdcJ}F!KUQW аydc_lh|9#'UrU(IJr+$e U75`YwDX ѧJ  bK?_;K%@QV XfIJWν3i]b@3'^Ug1ٴh-6L1:Ҳ?_|\PdP[)ZtlЋMMHlJl9Bj $[!NX]V ڰK-)`͒`<}J')J&|oz8?ԇo=S۷ּKo1{{nԻAk>KOj;?yw;>WR3}=IqoX޹3dVPpq-KU13#Z'u3Sg&Z(ӽdSBE &&œZ@$%/$H4,O+;^WU9nI} RdHIq]ë4'A mP)7_ 5jOUON)+UAǗ:ıY[#iԚF8-d%l5g dcQ\B{Ӽ58ugT;}Gl͠TdBh:m[Aڨ+R4N?t YZԯE`*6!H.5Qr`u$ [PpE[3(=.jb=#ZPY̡(gMRPϑg"N{D,gq'#Cݜ H vfVLg}M[SwT\Vk- 1 c2d&\l㈡nD(B3Ն e-_ބݻOluɁocJtT§M9\_U}Qf ФLM4ŞQ1IM oo6&J!eVT`VV2 ^go}>ڡZT<O2B"ņ=i#+nYpX>~NY>Y %P]G7.N?/2k93`RDW{X/1}gdp'YMGG=/jtjY~Z•Eu d* vY2j__FzNw ߍ[ n!nV=ۢS~2wF # ܺjt/QƩ]>%}~}+r1(+i5R_ *&V`^K-*k_j,_eVTq41zl/;~v3\@68aL 햐A "1w8@;r:.<{?RHn9{@]D-L,= B윓fȋi*TAm-!))=jEo-J)GՐ#ZYiL z_h]rD- v5s/՟m8.\Wm؊Ҭ y!;@Pu#z]:Jmk!Ȝh83åApM ިn֓U7Vٻ[B̩n2Q03 .g?ٹ,/gw ƎUϽҠ K|TaG'UB]IV9te9ٻFn$W ݖR"ȇ`&p 6އ`5-E=3_%-eԒ- [n5*Sb|}/'ҟ^͟ޅ|_,y̿EN+9W$|a9t~>hl;^*YI< UQ&4EQIB$S/xYZf5/ݨj9{¾ |.ptUcH7mY&8li5+.ޔe>qTlqwA:];gHbH~n JG9/zŜiΰR9Jxu']F[SwYlp SΗS,TC˂7B4K3Ʒ@aK/xT652s^*fBjo.Wېx} 戥UZW} #;rjj$SvG6 $1)Rg!)?Zn-tJ<@1Vslp(8ͣ%NhZ :'mhy ,GiN'$(@ Y(RE! #aEOhg9u"k7%4EjwԎՈ(OXf0X'0ƫ ÖЇ=bR;l a:i`(RR?a ڝߴ^äv攨cUU# E#n=IJٱ:Q*i0`hVj]3YOe}t(tWyۊT)B*"_+Jg ]LJUh3*l괧46wRK>BIpY|Y_~]DPeNv9AA%OIxpd~)Um+_SU^R,/u[^4}?xn˫.;嗄3Jhy0esi{łnW:`:7&A $Kyi -1R* !KJd7yZm3y! 
Rk''!_2V6}<3ʐc9);*tUFѯ9+&û1!GriP,KJUGVX87 (9Ar%ʬȍB6MynЅ֜4PJ"3L]p+ZRT3pE.J#?J},vOKeYlisIDb2bed2fC]r_%b+2)$ }1MW<RcRI27 "YҬ4, "|(](vJzt%KYlF+y_Jg?!5$ނ}AJe9*I'U{2Ųأɂ41~1B>6BCs?lU 6Ut7 ݑS;i]M:S摒B7B!)ZG:zx <8|u Xz b\N˯UPSL (DՁ*VYmRFRa%2hDșHT27ҞQL7n>.q/{3I $iK 2,cY \kx,AލK&6on佯hIԇ—Tċ!zWۏVM?,F͢Ê!|!r,ȱi=?1MwV`}p{y=j;-MLj~>'pxJ>'ɞ Zu}NATn'hm ~*?v:wNAblrSȣ{AĦ&R,wtٿ|^$5L30,MFD/~%aUhQvKBbQ loV0}Mpњ FmنL(v+򞢇cc#?ch9:꓇?s>Η{ejHrY*dCe I3Ƌ6˂W3!KDEY[U*Os4Ѐ0Ej Y0灂!ڐ-)rR;01Ҿgt.E W=E/GF߄o C\ȹU?>ONJr)nAA׿+ϧt2_|:1h|z zyRK5ݓ}kbF;#b^ ? 9oQgA/e&*Z#Fs75>6wx^`z+uy7Ϻ/MQrG ](Kvq_ڱTx)TN$ JCb :tg+i*{D R bxgW6Jƪ'iPbx@ hķP; DF"i7%eHP]kզB;xx//ޘVZw.68Ѡ8V+ۃ]]gᔎGLi"wime2rT $0Z)lۏ d6˜ZzB3ˍN-rv4$J, )QکC3'd8`o(٨<&ywq!EOImYA_MKuW==2NvE NRHډgռn6O𡣛Uw^Qvc!׿\" v?^^+롑:&~L6hKAIshu  u UE]06AL9`imk8n}J=~T7wϡ37yNF* s^3k9!CQeܼ cX P0BRkfO)(DqOSj=ל7 }VpB&Dby=(>G2<zeG%> YZBa '#Qԭ2*=yZ #BNcBQ ZZDYtB)% }aYs%4q.K>UowdY|2 esgG&k(-@3xOw+O.̡+S<7zJ^ %Ft^s#Áab"rWDgN HcIY9gUIa /ZzɌuGhDoRʶ Ü}7Qrg<s?}]1t=mЙ2 >35O~?MRKxn%%77p/7O^fk..iKs6 Ha\` .8 P:Ą8o:k pB pCR"uH< N, _6tjwn`-侴 >_"ZPtF?J _rn鳽*Q W+uZbݬ 1┸?v5\6n." 2(P O"Fõ7fҚphAӒċyI[+Sbb 0<$<̃2QPİ,?3qԙdtBӴd~sb̠$jZn6|js&@G]p CI)8a3?$xfnlF-X{e/8tkUlz\]5ڨh 2r˙Gg'QbUf=]Z'@^yo1N ۇ6 Wx\K)yN7nIDY]qV1j&7x ?*BrДTB`E;GwWm.w^ h:MVim`>JΗdrCO1*QRO&:@p  G%%jZј6XNmNYƓ gn R@P$%ׅ7[&I[S҃4%Ղ*nl 6d꣄74Ops(imcbv<$\ppZ2-pI0"+pc2x9B+‚h+*g3ÍZ- vmt :x:hkE F6ÍCwGI4%Ղ*nl Vdd% >|"jU[I\ 6{"4I 0PY|'?ԏhZ ]7"]{?&,qө{Ze*.vhMΣخ)Sx,9tSJNن4}4JaI'@RdD)@Y7+#RdN[fAY( SQ:JQdhkrܥV`7cP֪9VacάD=(=JGq88^xhd&eTM2vUYk6 RS)iQ1e)GRcm7wC.܁UW.rӏ K/S@?QDb02NlTj鍦iTBU PR ahB F/wܐ?oF<2RLB|g/Y,maβMKe7~ڜnaj;;7?ߋ'm.[Дjduއףqr9(/PxU_^_.Xw(%n?psC$l)z 4:Ԃ:y/]ދi҄y{Y vh(}н!I0.E4xD#qge4k8(jĩf&xkD0j5>"߆~EP-4-y#K1ipz 1 ZS^2GKwUT0)J(ô< *#l{Y"5h璣viER~õhiŤhoA4/vCl q3DD #Ae XBqN(!W@u^jIO A;jtDHQtP_734y80dì N6:Uìӎ†c4xm0LHrܨEN$eb e-$FcLcm_*35q}-`ߔ,aFypn֯pm 0g URҹgg?SqCBw@`5ZúZS䣣,$FF3J}@C:D%GnLJxCK nO_<&ܶNbl40zťZ5A -HZJަ֣VBk=@}UC36]y䲍 %Z_h/OJGZ"v&(*#'į(Ӳو"@r3xRe~Λ vCyh7ߌׁ݌lVR=,8ꔔsOr.tPi,*=7f([ܶ(ɇ ^EٶGݬsmG+Hu5՗4f3j lg$ʋx޷U&0(4#\ A>%W =.3{hDF#*~ c%  v*(b賣.={JEeNԩ$.~|X c>oHmDFlpY:!MR~J 7ʕt@TqXg{&tB|PQ ϫ~nfZThg2gfy'EKUՆB H &*זc&; KFz7Q@:er'B C(2U*G̾]T5wP(ISCʸ>zFr~()Pbt/|٣gPIE(h*AJ WoJ'Who 4BI V&gM쿽0a T VzR!):$܎ưөwl'd>Ifl^E?q;mvwoN1&Β|GQNք mhG%y@ R$IiחBHOX1mHI?nfchvaqU@  ^33J-QõPǏ(P^:gţ@B]54Uo}h+ ޼~ 5O4+wʂ:*KZ5.zҮLAJrTƆӼ.f!+Z+ߙAVBET Lfs3Wk$Ek jugi" h-+$dϧZ|\6;h&=]']']']JtN{>mOpRdx k.)?/I"=E:ŷ}ڔʀJ1w{NuEO|TMMq9;] ZE sQZAឣ B#1^SΏt `k`Zn9Kn6kuk)+dxz&!~(I"}kN(jƚy @lke Ns <%r`R;#Ťe z /q|ޠ'ҩXl9͉>qB '6U]uY3)(2nA Fy3k-AE$8aTAQim<2sΌ!Z-\2aC[*N`H,2[m4Qef[:Tfu:-L2iC&R{)M9@+0rc=vk)O.Cbe e Uy jKe eJc* 0keD[kJY0S&B@w'pln kH\UTNp9hT`3;DZ׿66a6o/#/};"1Z{!PF;nnAZwT\"ca:Q }5cw zcCU@,f c`_>;VRayZUWf7yeURyfV"'{+oTQfqv(ݫ^#;HώJUʀH@v3%y`DaKVQItZ`TNR+JbDUvjuPFHlrng!9 QYTU8 $# W1d! R;N2! ?줤=of e3CۘVIei51`jbySX˛*N!Zi H2hIZg\]x0`~=诤,ͺߨoeC;p8v08L%=hjfxtQ׌F;y3wd$,u" x@2}lS]HG=0/gmN+g-'pBQRu:Ȅ=[ M4 mE)cPCyMyF䚶nsY rҙi܆.NCsӨRi1 1ǐjPv;!\hU4Hl)a'ɂʅW$}Pոc"f*aD,|"b3߹aDAIm{Q]my6P/ Ew<ݜ-BZ-4*>{W%3wi/ﳋ6XL"!bcčӌe>"P0BekKq Vm{z2|LKb(Vp&f{K3CEx%t*WA$*mI>[]@ *\~;MQx~tqq̯޺e?4E|6YLAkn:O{_}hOw׷At~Z9Y:Lt~*`~˗G`/-euj1.zu:C.R`Gw3L:Ҙ! ,t~Q/&xRhW~ ~&M/0J/rfp9|wr0*v$2U+o_F"|*;( B^z;x96zŋc1_|ZYew;ep+\9=/tex:uRڜq|czd9/n1WN^4 pOp5/֔,sKQ%Zөcw8oSK3ms;7 ɭe,'ywSD`TD@EIJzSeb hV3*s77}vąh8Tfm]$kћ:3Y C$].f&>{nJE?Po*em6F7eP6`! 
N|؊TzËjU((Wk`VjB[rTGPe8F"ʻ148\[e/W}x5GpG^e7Z2w|$R!SŽ9@(:*?hCz(R4@ K i7WV|ȪxzoŋeY`.gSNW>ְ1I|nkRV׿%9MԪt%&Ǵ*}{Ofa_faY޾0nɧ|ز6REBpǨdIj 08t',nܪ9`E2M@&jx!'VW *s0f{o2KvFb%Rr]ǹ+0Zkq$Hc~88Av0?/5zhu{Y ;4);{?9as**v%6(f71I/N.J߶W *Bqmi,R4aR_,ݵW6̄m[Ӟkyp@HH2jx"òjab_w AiACW ]8& ]iJCW*k`g[ 8Dd$ \')3i,JǾ;"p9y5uD#DjE#_=B61Cl@ԡhf%8ЛȆ74Uf%/?,3NsC@$O ҩLPcwdy8纳`1겻hœhZAEQވb_֢yAbߗe@Q`пVJ4*m*GX_02N}} :SL@@k̶ڝvayjw^j7̌D)D )kc`dQ&&kcTьIM3©ABS ~A:E|KKZu\[ 1 V+&l᫃K$fk]G`Ѩ\Fj*m#$YDt!yHgzҙә5ڔ+*kpOĂaaOm=U>܀^ ֖y(t5hV(/MS2̧=g)"Fa0*JgPD_ @aX8ץ{`֛~ Զ @W. kfJ)v{`\']#TggݤB2o L*T:!^8ny`P!feøឡ,+UH#: EъMʛG[%_Dq+VJJ$6ț E!PUuh-A@XXU!B )('֘3)BTz!4J 2;fIR2=#ۚ]Bp$ gdׅ-N8LC7 Sz9ւQ(2#.OOT@2cx<ϳ [n1mf@J7E IWd~7ZF %k$W XXb+mpUa,8 *җ9"ƹdv:pzL  BDgjT_jLz'CHCJf`C`P-5^W<5+Jeœ@$|]K8F4\ʊ7v= `2^" $d- SEy:9 mLgjU׬^d9=FweR.=|oµ7 ,^]MܨxCg H"]`wӗn]]t_6U㷮Oڻ3ȌRu``],Lv;)û|@,eu8*pQrnS9{Udʨx/|` -˄:nxpɬFɎ|3qOEӯEi#7>TP62y> wn3pΛaR5swyڽ۽sRd )P=m%le1oVBu (!AUhS8m0NRìlKϺd"ȓ zNSJMZ:Fh*FD,̤&,ZtBzR% 5|/ !Qe) (wDb 3N AB8#2 k$<cK&4TBk6H, p} %S^zE3_qf$Hf?fk^zɯůztF/tѤ"^lѬP#ʉt_XIpʰ3)chQRY)"Y+-ZFÍH*2lx˥LN6[g57 tpTw= b]I6YzT|mmw»%0 pTۜ[*jW2`d!-=\N.}yd gL %F%|¢EHYJ#..?DзJhגn%j" ++jZ8J0VZ٘${w%ΰ;4lGY3x:#A?0Yq,!1v$ZzI%DhQ9į4\<\3л|)yiDTH uBcSV;S%D!P8K\qVNzR>T'\L:NDi\(>y93gX{NZ4Al]0D>; /"U4? 0K䣻Dbp2Bp cS)d}BSHoqF-`+ @ꆮ70GsJ5֜Xt$Ĝ!Q)(]PR!'@qIìD)3Q"^uk"aS1cddg4#-NT( 4]٤xO9H%5AЙvfQ]&9ٺR5;PY1\KQ\qu\%9~[&V3ض0͠i=]f[-6ו@aЇw?[eInrTUrڗHgrwkiYx xN):3=(x& ,: #?C!8<T4" A.8TXOxCѦѩg{6)Ь &|Kts`95郓Lq2繫oI>g7d{={tyMz&pGs ~tTm{D4МouF |.Zݪp"5' ttˇ=b}㔺IiBe>+A8-V mZ"4\n#ȸ(AU)i 0wEtfj-s9~d JIJev 0ϝN,p 쟗Ռ\)2lHFFueh;5z3--s4E?UWSpI=k ϺI4G~6&̜d#9jb\H sH+41_1j\Y=3(iZ )66m[Vo/6֭;垞:?\MsXj\}~yſ_.W@\kϊהtwN%8gc(7iWU:NX`y ֭թF6!sɬ[}B6|*ZS8(;uŠDujźELtܭzm UNiNOxs`3a {]w$q{L~e6 \2AF mGU!Bra\[.+ XeRcxO+˹QsZuZ+?n,r-4ﮢDm<[7;}xw񋽿tO_du-omY8yy[af8C`Vqk{qc!tpJ9X:`hʄQG*:'Ԩ~-U~ᶪ* ?azBsPT3"h.FC)ճ.\uFΔ )bP1v7v #ac, L?HI]0R\z.яD^]ilDsڟ:ASiNpǥ쏗ĬڲG2 ]ϫ ?>F)A^Wg+wG MeZ&۴pj@Fk1E'Z!{I4X*`AO- b! Q\S07M,(i]=ItP͔4֌k{cz5O81=md]I5k%. F7*- 1cE˶%B ~Ai۹hk7S[/j=ëdXWlo!DDkordJBB nd,pM85gd!5QM@S3aY @aLH:҉QS1/W2|FL/ _\v> ?:q愙a;no{DzA٠E${0\"$_ҮkU>nՖ3>֥jKuFKU[ux$|j:%TTWw}QG /r[$@tP5L\2;@ tCip_~ϡaNwPr_)x0\Qdj9ؗJvCl LV;yU!Zo0; FOL-`NZ<һW^lcы2IiIu[: 'b0AC&Dub2e:u)$aZΗЗ%vK_gLFG92~3@0|RS/|㑮AW<"[z|p0Sو 'urtˍ>EH͍oӓK};\9P'*8 Ή5ЃOJ:2.p98Р1AP%,#´ӽ6#q_VV A,Z5M4Q8G <ߴȂ2g ۽޿b;؛nav*~ %@ly B(Di/M#Ԏ;LD#dIt,H-yPbJFFɹN")ꌴTV0%QF]T3jc̡lUcpw.arٞ{cr:H͵FrDB /JJ:!g}GXJS&b#8=D+q2ɫ!}Qs4UsO>U ޲"|rMģ:{2r󤈊Z iH12R-`+ `M"! EҖʧayoWDGx{RIㇷ' ?K"^R"Ynf=AhdnmS; -UFo6OO.a u %sre/Uw,yÑ]&/;\9Ǘ&_&r b9:1{05;Ev^ jT@t9*8/L.҄+dR` Bp ,FZD.F:oɻcsNtm3SIFKY\p L A)'4@H^*%=y?jݧSUtMNJ"?JuϻG@c RGڠN" (P1cRBQM>컃Qkrk1(U. 
'YLְ6,BCrҩ6yB&p` 'ю"թF'6CMt֭vm UJŞOq[7FUgnN7XFX1֭zm UNҬ%=yLhю8(ȬiKr[v d`8 |Xvҍb( ݸ J6}=rdbw gs݃fM5q/WŵLqZ^N{־w,ݿ )A7/o?G"i7y5ZsS F̠˷~oo.oo$ F[{B%I"^2{?c|@Qnޖ(E> `9 >P&v8ʬ!CC(ϩG pn~;-18\gbm83Z?udGY*;GĿU#{ey?^疉p|!F./^S&"1XʺzS%F8O?L<4.ͽRC Dau} -"i@$dUԚp( ]fjDtd)J:3e}窄Ovf]%Fd1LZ$ 蘳Y%ju"SkwaͰދzv<( 0|<$ Ѝ+EW3'"Fe2S3eRoF; i轋*ItٴJaaF8 F h4D.Nq  L 9r0Jfw=nH)x6c$.UF=L:k\d oMKsMR*Ldt~erv)Н8ʜ.X#!*ڀ(SNRC*3qvǤ;JWR] msL+mf3>O7u Q CK;NB8woH!l&3pgjVED^IU\2;j|Y>m 4\ >,`$MMܛ*uEp1CEGDb%%LG7l `5\b/O&3/W[ *(d`xqlKld^LsK2'A>59X40Qao(q3#Lfεq3`J Sb8?+di/!HYk,uު)c$dَ_AT;cfUw>Q=hhyW# m ʼh.չ1I~:Ww.Cɺ'`lmw53L[t' ˧ފOʈمe8DG2bdž;b%̪\+SZ)b=,+Nh^ô5DEW%0|Ώ0^IT7ry!ڻ(TDAE.VH0dPK GPns(k$rCY1`O16!*}'}oTZ煇=}H%P5OGoxS`8V~y}2]CGo7Ϸ3{,B=TGznaz`1@}ɷsoSv3k ^uAxq_\ җ8v,Gj;=H5C+G]fX_覕RRQ>Иo[lGuOyzk?VA}f3mAlB[lHw>Bܒ}:s2#ο[^Qs!/k@4U^j^)>âc6{Ԁohzo[nt ) TUvrϵ2f;[ *OR&76CT@8*?5, !%ld}"&LA|0[<8Hg9Hqti6=Y9iraɥ2UDkFbTc+E)"lsJx$« !be9 I!쩈Bs/ *Yגu4䦈K֟W~N_yrS9}`'mѽ1jaȵT+#ZbD k@ERQD#f%EDS`O9@quVnkolz4|ߗ;gyp3AEDPȷl̬{# lwWKX woJߝXR.u+O:~K9SajP )X 9WԂdMH h; 24dDĊo|k*:ܤ$ټX~XOW)pl%H.60[;%r~ivSç?Jcl}ƫR%0&mc݄_&liN3Z_@Zk.j 2}zb<jO*8 hWtY9_غLٮ[Y N6XG #7fJ ׺U!_);&9nuw+A.)F!:8dݺ4u@Cr-Sw~Ⱥq;ổ mukq? 8̈́)mJ ׺U!_):ylD7Ve1ȣ:e(bZܷ\bݘu+-h^V|*ZJYȑu.XQmnmo ۺ4uBCr=)Nvz*L kw#vM"܏ITR4)!lYGGSC`e#tEQD4{yUCW(}To_%m+ MћMM4o"z| ī0Mg;KZ4ſ}(nɯͧfacL+X_hx>D~}5x_{nޤcS? 73wx7'y漈W칒3W߾y_,؛e~ osoGhO-~ 6Kқ5w_.a\=k1ÄN2J d%<z^r-gyg7:Y8=2dV`'F3n k0rL4qf2~ֽwի 7f?z5$7]l|° @կ>T7`dbJ`Hb4b]5|曟g{3Jd yR/o@M :1+~Y5Z`ﶀl:BP˭&e2Sm&A/PB`yf 7&G1K42&m\'/wX֫-۱(#R: )MhT>=RcHT|:`"GW?zM|`V+?_-࿜qhrllx2I˒(Ι|A_ XdЬ.bWߦywu*ĄzrbӘ,Hc錥"6>2)#<_9iHz('+h4GMu Ms5qpOҚ#fQ<vN{ͽVc%-Nđ2P4a$rh˜$$o?\DK Qɬ|'3fsp]<GVWZ"7O V .uttD)BGEt]t|$#ԏM!l4CPni LA:fni͌O}zuȌm2\_s\$eĆ9Ɣ1ՐQ5z0f>2XaBo?:KZB4"oŜq "yi2XEpXqduh(\H&%/7Ĩ $0,Q>(pU-RM3>xb uc(F4MeOWNOg iKwdʿ{KA5!2.?u8SC:Lڣ UŐ sو\ Ñep^I9ԗ:l(`ߏ]Xr!!T]fS<0a/|-hu1t]fMZ}7fJj럥O4iB2 $4*Y<:294ѐm'Йu>j,݂G57Dm2 5:C/f6[b5OMu-zЅBX=U"f%a+FL2 E?^3ϗ7%%qlJJzZyzT-L_g@]-{[] <ӝVV,-67=5]QbliE;6.P}%޶18ܴQݞ[/uz5kl_~0Ɖ=ݍnbtRMVmkbd;ŬᛩaE]s}>PLcV3UVAMnmҾ4OݧvwUx,`:y5sx[q6fiX?fNu7L1À0 ͥqŔXXGg:J"RN G4! TI`պL} SHaB*@8GIGI%mTྫ37B)Z"u[jS!;\~UpvR/ե#vڛ, NQ%iq"gR;;(Laa 8R(wu1}?}$h2J+2!C\ 7PZI=NؔGJzv2%J܉ÊCڟJF;-[ȩ 79M$ js8|AP8ORp%<74 ,GW.yQA͠KPcSČO~ЙNXZ5) i̓Îȯu03Ue'ώe7&lA͑G1v5c4JG10Ql(22^ YX}'&O#ʲ4Fsh&}7ݪbVJ!/_ 柝"}{="O-_}@6HQq!sH B -hVw/_\7kp3|m}7>y6EdI)^ +#ʼnG|FЃrkUuNK&P{bn L[\c|C,U _n'XzPn^=݃lԨ0锻xD2=)`A:9j{S|17/ɿ/_l zzduu )9|*#@gHXrgck)C`Dl3Z{'E9 1{(R##;4yv"5F;y `)2%ˬ%|10;)^Lg G HG(B2)f ==# h( |1gz6_t37}1bv^&-74)SEQ%QRpܢO)c"! H8CG 8%" Tz(^I|ڎ~ނb@#"W/UB"?\"HY)+ A$g}y _(R?[!Oap"@"*(R p2 YdH=cZ%GEdXOS+.7>p|l\N! ~A( ya;.+ X~B=Ұ'+$+$>A*dQ>.o7D0RpЊ*IlZ%1%$>f+ ar (|O(6ȗL݆!ҰeB v{ֆ(ui׼&rIDn+ȾՏ ŠNٷ) 'Px%u:e6˞[&qAٌ[Nx\?[vN=1֫V`<͆qt:B*vц~#EWPh" .cjUU;f%b,!vjٿ}= =,rFL[AsYY'blkzAzf#*BݏJNh!)ޠ>@srC+( vvBur7'FYdIyT y{40= SE&yƋTyZy6 s/bH=3!M)b_wXEN8ȅ :쮋n_BYm1?Qa@Ν5nQ)98zӳ7OXlf[;۟l+$;#.+~ƹ竵Dbʁy"BOyV[؝Df?(gҘ}u-BQaAs=L_l>XΚ,1Nq TϴGu0`l<& yt;R \| }RIڜ%a\˵8aWD@FESym ]}RfH\4ͫ2ο @b%?A[s,ڜBRn3GȊz(0+hw꟫gO{qIyYQ{QJPU{w:xif ->ŸR!!H~+2֝$IQ6mjWb{ȭܾRB&O=Ӥ,Ǵfd@JNb8XݹVy|=Ǣa4W@EԊ;s.øpD;+ Q4ܦI@ۊp8'P;8׺&Z={YOYbIvp بDS+-ps)(Mzrb&xweR^dfZTRfJl5j$yI;r}"=?Ű6djX7֔ hqe;z4QY~~E,RL8V3le1Ũdkjn*1b8/'&.^zyc=]hyIYj:H6gHHmXs=֓Xw$w>1mĂ@éUp Waiiʁ5=PbkM]-s >,EZN4ڞΡr1eӂ%* 1*g~c$ڦ7oi৽V?&cn},x6ʮ/{}K.ЭVͱGxo9BzκՔC"" }ͧ:&s`Ya9± &&C~v2EɌ?,NfZmO tfv4*Dʭ9ȵIϟS$bBZ=V(Qm1rCJ OɛnKwNqPiM΍Qrڀ=3dkGV!&\2UK8v4~W$ֹ I siVV ZJooo! 
]tMıL 'bBb1g ǣxNNٚD}Ƭ犪?Viw3 %x֥kGP(l3(˹@HtY^!mzs);HKnv%`M>eN2 QO2Ei>*H ҏG^ݥmkpqr׃1zk3-C=WS{h=ףn7B@[15#R0dcS=) l^ 'c긡ͦܝ1/<i׊ $%vsȿ(ݮm(0)[NS/q'M#$0,pDAg f\K.jFZ;{5#r;9I$xtì=q>ZbY餝E #JllD fkb!}O8F696?75G0Ő}sJϴ@J>td6=R||ʐ :!ձ_2wZeT] v ?m"?A p oSGUvѭ̜V% D+)_;s>zkn΁+ clXst+=H[:ikW)ROC}u헂;"B BKd|zOvI IWR!Y 6$.:Pw=AKjiN_BS9m]M\O!ۉ>|kcDc;`HՄ} $ZcS[`xԶs2'),Y}ѲX\\4HK5Nj!Xty8!& = '+ dAWG>ws4ˈRN<]ikS c'xV|ϣDLBh#Ժ=Ɉg ~Nj-.S9<\oc&{,I_ĂBd)I~Yob -6 J=g0ͅ zH3Z8ꃓԺl絀.%[kts"D x7W.t91UQKzTFR_j1F- d,x9JOw3Iu^7-`KliPl䷷iIDvI׻[-0)rr$̞g9NlE2څ$ݩ)ݟMWC,5D-Qޟ̻z4IwJ8Mڶ‹X|]zM)QH쭐mo[eTo1'ʃNu=X"몾l۷$AFBl-1|BjhUC0>EX L{8Qgt5Bl@TR纵{{^܃aCwY"e|:q߶kr}XBbDGO!1ƑߋQ9^q<ʲieىWJV4U~Nv,棴XgߦܟDpYA$+V]`?Gm[كeVMzK {OZ&6j{VExz7$,74U/P]^gOwR?h2ʿ6iXL p[ bMFBĂ pï7ܑ(yX;JKe-M,۠d,ݍ^W T<NdJ(0Å8%֥sMi4]>,?k8~l[ƙbtcv%LPx$y&4JR{Clʌ~Ѩ<]|-56=wk)/'wfgZoyy0}x;K)~*G]8jx[C݋:b_VZɶ68fM4hK>2t{ReDo6Ur2s{1KdĬ޳^+K%geܲ~ʫe㹢DX>2Tog솻u}:IV7{[ TI3Uj~?*nYU_smQ)ꎰ23Y0dNP"3R?ϗ2T̠nף87}.2-}FO'ϏxZr4vϙf"EI!RΏf+"80&>Լ豀ۉmi=B%aKdܬS;Z@X&o X ~Prez[gM#3=qyo[RSUWn7C(ߏPCb2 0"a@uI,Hy& E 1>/W䅲0'DI0?0<q#Y/FR/'A{6L}hM[,9dfդ,Qd5ŋh -]U_uuuuW,'J%2"XJU8ïVnke4P򸴦*,pJ)%C*Ȱv 2#\7ןPd.WYN`P - N颾y 5SL׿7՞jpÅF 4B|&r䖴{eZ`}!a'AJm٘{FXBCwg/7=ЯaJ4*K ~ ܮz:Vrӽ뺽oi8LNfdfX3.~Vf+ .s@OÇRar>. Io yPa܋ \kOtS;!*ʸֶd)TAJ{: WʣV_ 5}:&f*;&Nw&58Gsb@Ό_sHʹ7e?+n#J6G * #֘0&vvb iIqRҘ8[eL˃R~u'+wR-ؐ\gkדO7HOr6Ge(R~H&~~CF7py ~ƿ1MoȄt|A;lүhMCFU;`ڶIwF.ykZCicԓ@8eH6pUb2\[w>TP~Qeu5*U m cyRI溸klloܾX3M+lqZL8VR'"R:w|RMBp]w1JxܳRg4Q=ixETkoC&rBD$A!4Qb6O 1!X (&MBY _1sWA|l\ Ԁ>EB:Ӊ\ʂ8%(T&xUA5*:U:@KdPYO ڪ2}tMtsY1ll=V ư8%:V^g'f8ZLm{\̯`.az??5'XQ_]*ޚˇK?Q03ha{M37R7'*z}:Fݤ,:ҹ~sjtunrwv@K.BhHqFdq\<}"SiobZ,gkiPqMmchv<k)XU{r~o-͉dr?*wRR4wyl::oVx_iAk՘(o%ݵl;(](nL:_-1/I"͟=[?:p[B9n:+^6jџli[3` [Gq7J[vkiϦ.m(|r{6>_y~WnA.q#E2J[۫1wi ֙Й*Bg]dM4Pe9HVRiFERVB`}cY ʁqۋts~ݕ] Z2w#h өZJ%IK[R:e gױ؇unx_bf6e$[QHuX:LCU.G!rRE>~ΣA9ہp"sʜta)Re:`JV,fT˔e"#!%̻xW&0A07n׵pg+Ɲ K V GggRPn_]B2ղ>X a.Z"7{>dQ%3S 棲QH3=#XJٶk_SBz⋠4h2aa)SBpz n4 ͨ*R=~տ\WWBHFB?@c|(#03W;&߬n9EríU:1&Bh1C)IQ?SqSK5&Yw/W۽3ƫPm:q,{ _pq^%9J3=##=VƷ3NNrИ*in5E9^tGj47-/~f#yM=c|sQHo"W; =^4ۤnq;,׼_VU@8T`60a)XVv+ˤ ާ?;|0 <ࣙFpzTƫ`9Eڞ+>o}YZ.3Y|+*Lw qw8} f8mQVEssyM wg3(PA*MUA-hnN'EIA.J7ƭtQv"r~/}:L#݋=f-I*6 XUCg WRm);b=/u%K([Tׁ-7 tѼy ߂ 5ɧ4zw! ֟s;͆~h -C3Δa7H hq7gŷAf'A(y \7A؇A[} 4khe=gB}>Nٸ'ImTMj:{1t&9DWRQ)mcA{Qxu6Fo'`JTyΛ,;尾(H9pQGP):R7;TfGCNsK bZ=$NF0ʵuJoˤI/L4*FWAvX'ЬHO 4([gj]QAɉ+5 tbM(kyZsg|P%af" A^-yGݒ_h4`uW/)'SN2E \eY'՛#XjFIbL"Nl[NjstS}lOqb%$#J ń(+ JiEFJif$:rU_X-WIh¨$AD&7Z= X$!sVM.j[|#GWU6%7u 9. gm1w<==2x$L|O"Kp`?.W?9Xe>]*ޚˇ Q0CiࡂAM3PZ$|UA nޤL&i-˥ƸYpun5LȧLSS %-F:8}]mUD!\%~ݥPrBPBrź%OaZJRTJQ&{WɍAʼ0h{ŮxvXRUm77X#Hdfe Zʌ/$#:%J<^(=藂4}g rGԼԫ:<#j%VS|vv/'@cYЍYw)HoWv[j\5ZCJ(G卜{G%mva׌7R ̼#C7 WfouX v3f#/W!DAL2FC!` }zKHjt{/KX&XrB&TN8X:(ch@j1{]`T$A|c"8˝ zx1D33&d5i(dcV|%C1b:bQyE C#i85@yClRv!"9F6`Tp: cfJCI1iÔ䏖4;j~qk&J4;diV, ԁ7RDH 43, T YRAqʣ(JD$kFU'31kpA$\ExT  !|cԠ%^+  T QIE8?Ggƅe2w(-*8DЏ%RX Uk[Ͽ xS槀0w68!X&Z7:ְH|V l:p#֡{+Ot:ܬV0WQ]/y0jz" [e)F=O"zr7FOG߼`%`2VN|jR]1"Vonl7* ũ䂾= ~&4j8'B1*'/?~UO㶦)ɔ7m?k2^`6׋MB}$'܁>e>g15;e=z|E+QH _ӈfQy͢w5zzs9h-ŤdknmGOq2jkGTH M>H)\ɤU^ +: S bL&$Fʝ=s'Ws":<%pJ|,TZ, `ʒC=@W;kT4@/f*{@lm\/.7(;?rqsv_t_螚Υ|8{wߋ_Jg>->]:y4rpS,=d$?*EV[C_EXm~q0m״&l7O 3ni:!l|⦚ |ƝoVlgJi&_u m3JW"bh1ǁu 9XRN;XƑikgclVhu!p-)f;Z75حT9S>!̺3kjh\Et \j (EIFa ,6|1' H A0ŊI)S Tb:Y#r^t!5׼% r%f%9Y: w;KGrg(Z0.wswE1w;m)04]p=9ɘ>w[C(:K.`*d6 Tlf gD2=bR9'Q!9Q-7”B/]$9mht,z^ ᥢb\Vk0G}luez\TD&GmʡB"yFKyKENɜvXY ٫u6gwfSQe;7h)s %F+LBPz# G B(wu *˫*ܵUAs{eW&b~W^u^=tYW h)gvPؕbwЊ(ɧïߦTl+H,k,g ڭy_:;9r j¶M7u'w8џR\3v8'AT oUi8?(C$S N7/G b:B7uyfۜ'tlۇyXu,~B)e6 Յ3Ya'ioT!p5Lx>7p ГQ"˧\F:aV4QEUe9AhK9et>_`qWkS\ڇt AkCSFRS!Q``ƃ#{ǃJ¿&`",U":ToF먄1:BOa K-ňלe΋[Cd!&C )1\"se Vp ئ+9135* *gpqϥdc y (\(CMg۪' Y՗}|. 
%c4^BP@4'3 ΂!`3HF+ n]qtxDO, S8ԸVqTC` ѓSn$]> &Bj$j<"Z茓BF׻V"b-m>%*N),;2Bx9Z0!` IA-p@_g|T@1hYϋnu:ngdG  O;O_Ǽ;}f>( WcWN T&8J"cbic9 &5~j(IFl܀()GP՜2ܹGbqA Hq(!bINznL,_?j#u %ˑ:xAfaZQ܎~0Qꎑ:mE~6p^slc.:ަI))@ >O}7" i>|#ރN&7wa!v:ݟwޟūj}=9|͟3iڬ!;{-Om>>y'g>-ٯf>~1y&&P 5 ҘZV"a1z!9BimK^ᷚXP)KvFJ5DH,Uk%ѯ>6G SZIk Ezz !J0rOMgZ Ҳ h\V!8jIךi`H"_3 @-+o30G hNHT(kMRP"UV:tc -҇G<"-y(7fާdRM.@xx^a*g!s9OJü8.d>/9sPxV\yB ĺ2yu1.y& ;D ɔ-a >\Wj<0ݢ0[F9=şTo3wI^m ^sudwOOFN7NܰH'M#O Qܾ-k9x<8z `=VJ2RJN-|JIss (b:7XJn'Vˁɒ#˗Ċ2BaF&9&=D&gqIZg[\_Jѕ.Akw ~оhKy[.~h, =%|9 fdƈ\_u@9xnjEdk{;Bw㈇",'P$33;Օ:B+ZPm?{ORy::89pBɈlI?tƮFSGү&\feus.礼c\Yя}Q^82gT!,HDMٴg"!vn8Z@T"<~ 9$U槀:{~,_PΎφٱJ!/f,f+0(f%k9a[Ճ!߀m<. x}gS6 mr3I9¦,Y6ο* YH&"i"5FrYŠ&=;s׋2mK)=lb|T^#sȔ?}FrX5Eΐֻ.cܚ}l?Z_?6?O!]0M7Fx0!Ja2D4x_yYꕜnWNY6!E;RZM׀,,NBl)6) Qg5au+t}Hq| (Rծ?~ h$X@㘏5bbp >J[tW*ZbSjւCXxw*;B9Ay xDO8XU4gTV-(E Ҿ.okqJNg*'Zs.j7y~Z3]w=-~%)Փwާɜu{&"Me*LFY4G6FIbG2$XPN<3N!r,E68&U4PSOf4ݫ\$pfHGtZQ4aMd:er%5 ^6g|/[B+ {wzVp˦Zb<|&EލVkuqKw1AJTsu30zV~\5)Z.b I'=\C69^|1ʚw>P\OvF_$$'10Tɂx2)ZNBwr@HbCbbh+*M}{"ǐƉs2N=qRf}Rĉ}0O\롖)zО'EZu)Ѝy@N3gB"Ko9-ɟ9qBR Fb,J[1H!qAxGj,'^'?{//;'HtZƨRC$I[}t!M&UP5+CY%)l*>y4zsAZi53FZ7g iX60njS +T 7N.6tZmbP=xq+oܜ1 FBO 9{{5/U0.5T7a,It>AՍaק"?éI)W=ڈC \3/ҟ򀷐|%%֣"iFe[j8P%mot^ H (4㹝\grR(~BˉuR("jK/.n!6KcE|頕&bDzIt of.3$SzAz=m{OC=e;⤧JY'Ԟێ aֽ痝h'Z3ŏ:( [9MC%wUEpp6m"ܠݴQ -Eٮ}7EE>,WtEDm%Еo~/ 6Tq_fF>:'a7uO:-7;SOQX׫43qzOz'F9A(F7(xl8R?hqӮE9Q[C_N<'ZT,f1cnHĭ]<"c.r<"c.e~:4"}Tę<&/$>9b9 VZy$1阎PSpN~;}NQ*hu|C 7D54+b >Gr4;4keIjvqI$+-`G)s0P4^NΈWoc_bsMz"B::ֱĀ*\0Tf41@Q~6YJqS;1$F%&˚9+NqtJb2 \d RԧyHa (G3+U%F$F̘gMHD cqOs&Dk8N(Q餔bDԟ^BϦ1\ڕ`~uƲ:OUJlM=u՚c 3Hb3x7']4WII%( H*Œ" nlCߗtӿE ATý_ʣ1E4mEŘ8"6/)HaEkd@vt_뎒tO/OG;fR*50X^0Z4|׋'Ԇ)##kL?>ABL/;(OZ8XzF G*k1?`|N'i-Q)ePEa+hM ǚ]jj Q|y !"vϏqhѲfP*yu7O66pAYQ[5n*a'u%ձDg^smhR6@JB.Q>)p$ Z]0&.Vxc9VxcXa3۪\%>D"e% 4hcQ}ZeR ӽv)ݻkOf%dX_kJТ|^/z"׋zmr2 <.GTQF N 1xI6(5` >t9>\*];$];_#9CCpA{(&Dϸ`)p4|b!i"HRo@Pr;[ A^bVyAPSQFh"B I#+p7M (+EObJah˦v@ pd3dH|~>J^̻`$z$I%ʼn8cGi+ ,ʈC&Uef`3DRЄVU-evCfo,y o;mW%yE)ߧEQ ɣpa|O1@k *m-0-$K'D \cMɲ%OJ$):+>E; < v@cBμqkT4U iSz"ɳ4vr]d!zɧn@#JkD3^o8:PCWh-: EBJf߾h("@b(-(%H!x4wũ|gc4E'Im lmͦoGJSnq,GvtuTh23Ul p/Fdpܚb,4qsS@LvYMmJ P z)A `&'*tHh#A7n&1J$BmamS{h7櫗%ntğr}sS\4Cv29e4*^DvgSaR؟ΠQg$[HlV]/gqINd%NNNUU|7?5-4gG!xr;zяo.o)gFyߜNFs3Z}v{gDe.jRT]'A{x)KuR@b#dߢnoPW<_&+G%I=0߽rX2f< tIg毾{2#A% E]>"6c mk)HA$ۮ((ӶۊoW{\q$ٷ7|_WMw/9 r؁.>\˵wz{5Gȓ1$ )n݌7RǟXMl>v#Z9:Nڛ /vhũu""cT"X<]Ȟk4zAXI2.ygtT$*%p#hrb,]=w<xqrF~wrXjϲHlsIYZ`ܷCjoCKmx2@E3o6TEmhzmڔ  fq]}u˳f_KZoeen d` ϧ3v )1 ~px61?s|vy=xYpϰ8ϾRRU'Z gq-[&%9j؁NYjcx_דIwdRwx otr̸SeiZ-}3 ?SIֳRQC5]U3gὩ[qCOl/_63@Tzn_Ym-/tx=џo[Ƕa~_-قʛߏin'dF)}0׷xd5>a6 >6m~z4nƷGgFepd6}ݺ4?t8{?m O/|7,75l3܀>,Jc{'tCO9^[I{sPE♞yqg@vrѽa b^yt1 zf&gd^:40{ܚ}dǎp4D Ǟޡ taaAP3QX:sР ΁ÃSϡ-Sk`;~mok\ӹLmjJr;['0I`R F&X㕱Zz4J oH`خ -fؗd4899s'omQ+ uZ5,Z IwL^H{4ZbcimgX)U< I3%8l3g~GyyHgQ}t(s3c6q̓V%\0 IBtDI[OITދn"=iNj8ŐtHz`i 1G;1ODOjozNIG]?wFа 9nK^RUyVMjRKR*H-;@wKb_ 6-Ԣss8 ͛sGIesk6Lޘ_I"O.vv78hP^+g>- fwYWNMO6VX*| ;zsgչdyǟ9U;wY+z}?RSg04s)U10!DªݬAHA_ohun/び>,+~W ΈV, (?e BiFQæz_Y>DEнrkř{hʜX}vVEE- XBX\[~ 0lԵVC,x$뎹m9m{TxLJ@:drAmӆAF-!m<-mJ8Md4gѡlOQY‹G($z4. 72hE J_s[:%8x("MÐY}VX+]R#EM%- \f h7(Rhk KY/ۡ @b!#XvA&ƕr2PC%JAX%R-R* W KwEآ+!}Nb=V8<|0_ޮ-0^@w{9vZ=ڃw8Sa:G~l4"k][(\ ntLFJJuA7ꜾS' RHj$S"Akp$϶ҩ )ِ` ^An+`1͡ԟc^/&SG?9ͷG58"iÜ-c69CZ ǁ\$q3O߽g?oXNeAا m [9~S܍r7~)wSr7 P)lF)v",TV%F Tqdϻ%. 
,* X*"lUO.vvtRF7~ B⍴&=/?, \?ft1QOZJ}]YkN~\n( jKNj^hQ!1"*͋(_Ag(X5Pi#4 Pʐ@1gtRP[*k $ 5I (6/~ B@ܬarxߝ)ڭfbw݃|uq;3:\4LɬE>.Hѧ6d.dtAzsw#Iv;F9w>>j+%4zG;yE;d  6=5;2a4-q)2C6}1+ri-BM Qs^h-P|u*mjǥտ?,PPnBS995$ĀJ#4(Kac L2) WJp%q8@vC"=au!%H*IXڿ](߻uBťJZ$*'N8eVCQJ++'v^hѳ|Fː^JQn3U,dP;id(j3Bt-SNlcInQp@r{# u&z+wj6X_^K÷ן}xwׇPE&]0k2b 25 m5J0NS_zD[]APK1*;CCB GY_ ğuL`1l!t-V3/Z&GhCg_,C\Kx8YWܲ+T y d!0yeK__H;#^;Yθ`L- ݚQܮS#'C%%UI ՠRno*t*+* chjJ@ pŒkb մȣ# 1NK_DQ^,KcswJgMehəR9%Ihpܓ=3qݴhh{vڸ~͗}'^Z<]'7bayfgUNϗ|kլu-^̯*]ݷ}U] 0 []۷-Ȇ^.Y2l=-'0]a6wk8r#+8r`=8kF3Q`w ax~{ȟ|}o,UI W`Em'v%ֻ 1WsqZ̿L<74;Lx|5 "yӼvm\sXƜ{Mq;>m¤Fyo Cü[l[ +7׆Uq:YACnva2 f!e^e(2tR8D|%* jvevSonf)Wo``̫`2O} Xկ'X?zٔۇ 3,nVw`߯DŽB=|(qcéR7,[7|s69 {j8H[9a 㜃m?9u臿6 p߳pICYYOSA q*,/>NC0z/qHHڍ8mCcІh{t??9]O-A#3X6nG%D^ TY޹4[(3LVATs؟*|:'oT[y7J7t^#X t=l5.ڣ-Z RJrzPpUӝӹ`zG5[{cu1:W$Rx ?zL/`2@QbXOwݛ71e@L2 o; z 1oj OY[-ԭp'y1"|τ[ ]\|~zg}A#%iXeGZ4K!d|<%1-+k EQK]}u}= AWG8v@ S/v! i_2=t JG0~ zn"f]( i=MeѠ(Y/n~%s܏_OpQ>3JbJuЗN6ZrJٗ:'=G&)p'3>Ëv戰ldۙn[dϒ)Y9F)j#'LvZWY _';Uϑ.:< *DJȏ];qs09"ax-{LpIe C71 w.[ @aWT`2Q1X@QΣ187]cqy#13j4AP(@cӶlPqLpښs٦cPXk1A9F]sJx?`g4w\ G8`fHN=ns R`ە0!1aO"C' @IaPI3P0+bKDxbDp.Xm9 쿸BZYo9[LOP~yQMn}]]w]tfL݄_WQˈ6%qw] vn=Fi@|\Y.A+mşۿsY7vBZ'RpA2 Q*(1#DW2RHP fԗd!-\:` U@A "b@fchY  da,e2po8!CJVQA BtI[ Zc c"J yv I gD ͕V 9Cb()qRXTHn֐T-[G/SW,'L@t[ݥ\\Wa}s?_{#~o&o.ߨrG?b5ב/Z-#_$Ïspʡn?gUa6TrUʒIM4Drqm+j+#ћGBdz赓ipgJj:>t5!"F!$Iax8YWܲ+NPD?{WƱ !@XYJրmvDЧŘ"CҎ}4x1bsUWA. g6娖 H#Rp^gNd*H0h+&V1z LSPQAK #b;a֪G5Kِf߾rO#}p{l̻a[,őf!ŔJ JQ^stL0TI6Xy0XWbR -`)āEWKyX{a%_vhzaT7L5Ř|+ )磯zri/M =3`oЊz7V*I% SyuIFT7N5h{B|$o !y:GhQ1P̑QDv|4,`/LKOFS& `Ğ "7)PgF ZeZ\Sq6LVxAp.tjizFC \ b1̊x>L Bv8΢'MqaR 8{=̇nlX<ĺndJ֬[ֵnm CܵnZukʃ:iM;Fh{֭@Z6!S}vpU<5eUv2h{U]Uh!O0%޵nBEU)@36n;M ֬[ֵnm C C1lub/2Ӧ<ĺ\knk@1%֐ՔkoU(?5@@z[Aɡf-jB/eSUv0z+[ E%]^ׇת&Hm &ڢ&t=[Og+<-pKͺUMܞ@(zAZ3%Z]vXcs՘s@WcѮ՘[&՘]1 ٺC5c\u5ܦ&Wcf ՘[E՘91w565cM Aw5ܪ&p̥]1 Rë1 DWcjmj@vYc՘[,wjO ,$A]1 ë1mm՘DYC1K"sWcnWCߴ3՘s  1KEyWcj̭jzh/.k ]UMPë1+&sWcnW8!WcVjkֺ1+% j]UMDë1DWcj]SO╾t &Gホݫd={GYƉ"w.qO>t<5} zJ9%,g*3IQԌ& >ȉ8BV2o}\O''l6Y.tXuv{<ӑqaiA =i8|֜CBJw5=vi: e pTh^O{;J2aa9dfqM᎞9#"Qp G¸ @t2] [9d֦\o!h ?/\&ds` :Ck$)sF1 ;-)p.X-;i ͪB*l!Y_ db&Tp2>+EoƯiXd4'J[D *K [PP< e6LgY2N[J? $!++$9A EC`s uBpg9 ,IΝ)ޢ$)vvwU O:2Y4*ia`pwÕ=HӿmJP)Ed❽/WCNi_bv]B04|_o*fJLOb,RU[дBS#-eT;Mf_!eL3}N!vvs$5qdz:x4=\봝.Q]vguںUU+uجf0.7Z,o2*+/l2R!Gz6R[ty3iT6D͕Te16Y1#D}Ğ[53ͨp^1RAIZqA&}5J. ۮC˃<)+ꏑQ9VU8!Y\3QpH#<^BFhd+ENVNof,T/b!Ȇ罬j0F3Vk{`h#A.%O_JRݞS,mbz0ؔЭ@ʠ=D R޽zsucHCJP&ӓ C!.=od޿?,f]F(˿BG5#3<g)OC[) L\]CbG#8xSWlIQ:~xM>r7j ܠ/Vp,H jc=sܢhǓ4iexOרf j^jHїH&6$9{c-|'1fͲ"\!_=mǕR^~G/_TgHfZL1 QrD\ݫS}![yzЫMDCk Pd&#eTik#TWvϴ! 
묂PkH J![(I W^sW1QpƩQ#4X@jb DBUzY 87* }Hr1F\qPE\2̍waC偦W!.h Pߔ6vXç;̮v9fjBޡ+M_?\f޸6J~NOti EǨ ?0N ~՗9?g"u(gWyu 7)J;H lHqՅ!od54}(Wƪɽg!Hkn<ߎ7U֯n$eG)ˆzC>R 3qZM% XHf4p荄cB _f%)758@»ĩl1Nt܍ۃd4%,W`m11&4e؄XP}n|&,r lhI9 cCd>F"Tw} I`} }d(^\ ſ`T?w764f1-6+tK;y?{9?hq8^-/e3@HaF[Hmo.;^s]Jž]2U`yp*}LxvSD&>,wf#J/৳f̑7f#GdïYe4J=& ̄P3!/Kpnf9fk3GdpXNUWHT'cs9tΤR2s!I^MLrA)W#+_0LM(ǍpiJV: ]1_9sZz2Mf\0]Yo#G+_v>=`=pýyQyhS"Zcod:(5%H-_dyDHR*,n;/~` !gў|,A/+yJvI/e-c\`]0l|oϊJ~a2O>݂\tN>(\ tZtCW{T3p0\ǬCצ[ϭ3^^\nat[Pc k|A5ëUՌ_Α}q,4E ͚p8er_[_iO6D4xXgfCubqLPGs * P@9=6rcz~&@!J1q:'jEP0낾}oa4R˽Ӽҗ")Gos3ɑ3b`4TY$P"NxbX JNW`Hdqdվ/Jl#G4A"fb,qR ˥;-XĒS;mJ)mfCٯL$(fb<9Uy*g@0)ǻ aq,o1; 4"a *NKq *0TH$k)`]5)Jn]q6lE&bzqًiX`<$.е)p뗋Ϣ+fϜ](ܱxt5$a]gqW57PthmCh(81%!f6=SL2Ív*o2:V؍GauBfuVxוdTˢ5ndiI{(D;mx18 IY"(92'GVc)95˝y\Zڻ/:l"Jp&r$MfFaGV˻ z[r"^C8Aiwplȟc7 edQn-v4PeG"(iѯreF9̎ 1}kCtwIh5p UJNz<)N2riOTpam9>uGm>Ž:Bt[SײÅ3F,|+囿!=6S ]f,HGSSRϵa5ѣR!l:&Q I:ڔ;)b尪7/YbfLFsJ:*cEtFH@JVc7*;c.~ ^BV5n{ff iὍCM~Ή %at7`Ptiӧ":}hT)^6 K7")TPo"yC_''t7N1>e9m+nѬâ%z NޡA!E8lӟ-`,fꏾ9x>gq|κ2n(C 2K :,3zsʨ A+C"?h>`C[]pIq'ʠnG7X |ћc`yK>N7rM1¥~EnZUB|eXmJZ/;-*AK:w\8q︖- ̐ IR8 >7+es H+c[GVHlvTRiqt /x-]9_ oqtѝGwz\v˂-Qk, * $B4PC1&2,h%AHBK亝/\"_i P(^m٣~L)eҲU8F-V=[n34E C4J)1`E jZd ޸Hn_{f_J$qE^q{d5&R&8ERkS^ c`j4u Ig\"Х}M2a8)aJg2+!ךSo&WX5c"38giAHLRZ>Kw& 4X~HR ITC]>*7oreq "8Z3㉗T1R؀߈VPvu8H)*#~ oa>Ir*. ^x4g>PO?oZ&eb{b/|V,+DԖ9t5SMCxgww3~&3RJM8F" Ssz6)1ڍ[DX#yی ) pF9-Tc&A5 }۾"C(0'y MSO[T`X *R0raĹ bF[8' JzX/@I[ e#4X,!GKI^.D`  UZS7Y)dp,`l9U<"!+*T0}n.`nZ"'cM 3D~MeSúI%Ë o9D\ \)6G"pQd_8(8. * ܆Ą Ĵ:_$2ۥ7d\ 1!@H?[> ЮrjA\O0Q _\Z§@2ПUO)Џ` ͘+QY=@#NwO% ˩,Vr; '{N(V kjx euc6}ޚ I\&Q-BK_kE2U(' ݥ:.w f=LV@c`ZU3#k DP_I%GD`DD 7%)?ƔjQk(xfK$G1挝3ob]|{JrHwԴOvH|tȎnoJ׶G# 'x꧸v)L§/74R HP%i&-qM$7mzץCs9{ѓ_ۜ@mno-vv9wsǟw$?5s[Yxm,1\гaU,3r藃o+L~VW+ềOy|O>Nbfz]ɢL홽EP oU>m-#ѕn'GX4E! luf,TgBG)Ã-?L52.Y;}BE=Cmi q"uT(O`VJa#@v1fqq 3j<8f1H8cn]$8T#jYS+,!5"w=ipXMKN_4pe`&<5i>| ">NDFK<=.$AmX C1kXH%d[[1qSBX0ZH}>/3kD%z[kI_aQ[V'>4HkA``=h"<*Kg</T[µfp!QT:@&zH4Y}r0{u 1,8>C\\]^xqU9,fã^ 9D<6|>>J+a~(ק.gk&+D3*1|EfcXe&72kJ@fJmY5B^.qp>HBH={c!i*c=+pJtėY+IX[Q{;e9Sy- 7ﳟ'Rw./LM]##Ki6ֿG%HvV<j2s厚 P3֭\ˉ4>I aĬR 08F;rz =q)Ng3ZJ`R<^{= 2pg sLC>d"{kX:o'CS{nyg-No* +nNx((nhG`oH .M99cvxFI ysw0Qq›0˛|q/^Ep2GT8ESnAK12ŁҟZ@1D`׻6J@x0m}Ke_a9/;R+~4 $ŭل+xOwa2^+M#*xS:prjn=|EESaԩS:C7WGq/bo\][s6+*NQZvL%)Im%N ud[ж, <sx7}tz% y0  &(hҵU\KGV*# [`X-l Ԩb.a~Q:$l3~%h+*N  kz1C| YIQU3ɔI1) ,&e󔘌2 =ȳʳfzՔ mW>Uo6֍rDm9wYmovj[Ծ*(Yuo-`:|XYg` lA0߿S]KgԔ ]c1dz:<kMkMeMַjuNkE6diuͮHEpYMh}(TJ͖n{:aW..* M7C:1=5Ӂ|.V8X+)/+2'WQ*y{_/+:}Bd2p 5@K -\X2gBf_P'`NRUڮ˱hL)<1 KIQp$ũԦ뉸ȋc _fL`E-U*˂<ͤc`5{ %6dž$%PiqP tlϘnWFC6aQZv[k0u'4} wǐlv (2[6F?Ap*Az6̏+kݼ=剾 -N~ Nī5sgQׄF,>`B&5ʝ#ݰH3i{g~.U =wh=;`-Z.Qtފ8rY_SsiyL@ Ef,oH Vp3Ο7u\^%ҕ)'<dOc`J6,v˹DZyk%)1! lJ23abShZ(ajc8-v6*׫ލ˪nV>7e ZvlzWh Le%Kr- 1"& KQfdR x*C0Hd&eYn E @@¶&l,5)m\IL(ME+3ӡRb1}(LstYH>՘rvbjgNVovί+4aѳ:ey#3[:uV+E0UoOܲ 5nq_&Kd3O T+(~>53L74BO!Jcqp_Y XtEp ~|wde_/z닙]2B/t8/9w\z홀 LS@9vh^Q5^}o_/2w00逕7KUJDPL:ai ǧiz_\vUo| $<}~6q|QE7VO<J dXr=X,|L9X:k $(iV'H'p1E̜QϐZF`/B %L-7zn @Lo*u?,P@cu;?n<˖>S֘EkX(^0*ԯw:J u)֓\\^ N޶$ўacÏ"\v`¢Z-2zxL]Uytg.MⲾk?HgQdw϶uWwGi8ZHC;|A^+bJ\X<S(IB C$60Z(# ,6*KVǝGf#ɍ4.H ݯyozsCvGv>R&=)U=m\ʳlwa;l4[q:kX<е7^ZeGTƌt۷Zi-LLGiC<\)&u"xܼ> I44bb| ܿK`mOi{KdϹ6|JfHhDL"I1%E.i;=6I{.@ԝjNtߵN6RzWNRIkUϦ#R,Nʟv kzyDV uSd@120yA)1LIQS@hfq[;FiX@Z ^ xGY!M ".eI쫰5M5 mU[9܅h>BJ4G02gΨ̀W՜`[k|_( V'>V(Cnd 5Ur}4BD [\)4 $Zzd/ĥvۘPq v3ak¶ "Iv %CA( %7 bE`AUUj$*Qx$R Q-H QoIcmLMr v,eQ8fz>æHKTUS:!-mD8v -愮nO~²Y]wPzo [у ,j2$O=I^5ȩ##ym571dT+}B ɋ/T7x6I[wM:%'Tsь:8TKM(9^$S%U%fj!ǏBTgٞeQi^a{0Wv2EU VSLrx?> M]նZ.Ucv IU}Ʋ<OMZa̭ ]fY6wWjv/ҳ,!6XY0"I blX70o[(. vF4$m^ѺŐ7.A2hEx;Λ e[(. 
vSi-uu!!o\D)Ő5<l}6hv}70&PR}v:rF3eLI0A ;Jk QI8%V_D0Y1nJ1isL1$a6K+V ǫpY{eIͥL*$IIXPV2=QmwxP/h>ާKrAIjrb#$љIʙ6(Yy_s%`עcO=nܱRE;}ةu,aU koZЋzu:bvSTsʨN|*:Wh1w*cjqdЗþ )4n҅49K'~7 TU:R>G4EHn!g>+hߖmx&/CR"1V"ʜ'6E`r];w Nb/;r{g M1|Pm?g|y!1kq}?j 7yCZӎ: 7vT^ kvxA]wOCK񌗲SԬ֘(I}o^uȁМ6Pq3h-@{PV6ba"J[lyJ/RVz˴ Vs[!jgCӗB&PfFŃ<|\LeLmJ*J¼jQ0.ߨROdyPP{VSe)0qS@\@ e?m?_z_*ۦ5KZї@7x.n^?0vHy#p-j^nRrylѨ=[or rj0Xm S:i?*UsqoNfwLgO-gv}kKL7~s0U'uwJcB,ҭ]@sۓs%>Hled4g.%`QE,s%.L d.O 0V 0F4Kö%#/[S#,ٍp 7ӒDoIYClgr]'_qr4E'GXm j%'㭜[ٸ8͕r') ˝јrn1f\d6W{A"(FxlJM]qm_f޺RE`Lw6 cJ;U 5y cVOyK0W?Mz نۅ#EzyٕCDMJ''jX]P-@ aΏI2A;,n3M9kzq[qTb UG׹Xs`>o"0/?Xc,(:gBi)苺y'oUӇ [?>X*~P竿 D JԴs}„@nUƋYŽ3ʇ`[+Zxv*;sAiMO8M)Vx}* NH N>CϫbDu# c}[Ѻ{W|*?o+ԫ4jWl'П yxdOc`H6,v˹ZZjuk궩V1Q ^hC6%˲"Ø\\\cJrZ4O1( \~P^^9g˞]>T݃{ 1{~zNE4_8gwE w ~%Սbm!Ԋ]=̗`޵u#EЗbW 8-۽ v?֯4%>HF] 3'Eo̽bs礼{GMs).ic&c4fD,ED֒4K. aFgXvT%ֵo,ւIh\iwOw_d,֛6|8|."s>̭Qzr覎L“.Rf%>$Vyr"W: w.{˙U6{쫇luZUZm5Jݣ(٪E 4kgp4e/gLݷb:`,"ݟ{5\MdT ')0000{BV;hDs& Z[G!:a 9-  8s{O:O%P-YVgoH+RҴnc1 }mqQL7#3AHEW 90ǴuAf{``x ͸aQ$:͎n~ƣgyE,*! c6$I:k9qLGɜ @Ug)ܑ-0#l"]F<( c:WqrS6Sc.j\fgYL*!UKJdYr4tB٪΀`|gE!7] i@FJoÛûWސ+O~&p5@ i$a''b )~4s⮦?N&W|֯gg~ັV _ &i87޵os&J uKԸ\3߲KlZmUS)BƔg -w3E&^v+Wm#դLqat"8 ^4^FAm^O]H$-@7`\ӦdKZgXX6D>Z2DzL&L˙Ez,CΑM+-?ˊ$ߪȶ 4f;D0ҐE]4V|׼V|xn`A(*M8 CiC7"L}Lߜtj _ ]ܣAe^Y ^% gRr늴.tʚ"NV(}j9DŽͨ&PBulO#b9J[ۜcMrYk 4Qr̕m֞ $OG_izqBzuGwѧv H$FfNm<LURzTB'G86D3W}ܜlI9Asp0 V"3mВ̀3ZJc Z"{_T"?'!΅t,ҧu= 4!Pf:R Ǥ)e%+k΢Ae2SHYFsN<+gY8pz82jM |Z$x fuY3G"iSV X,nCQHK\fF<7`JcuFb$Q֗ -4 LܙrM3|s%Fg`SC 3DKԨ042ɧy og4Uu֢3xrfeJB@$ʉ Gӂaf9E9 Uy@-!EQpIZ&A FyG`yfG>Lv02%HK2H\5g4rt&X[Fy*=&U*ܛU[KVm'oݧY3_IBhdT&G,9r:V6D&ٹ P,^Ul$+o3s]!.#5hP,$ouwzH+w0[ckXbWZh)e+Amn%+LL>ܜcƹ4[ܝv]Hr62-̌%5 X-(PqrbJ8ٕ91+ʒ7ꠁa5|ֶDіww:cr'ӏ_6P.@qe|®r>gN)[ujG5>#kG%kFWY?j#HՙX골UCZZ;48VtJJM;;l۔cR'Jzp_kꄥ7ҽTӡnc'܊R -]P|NX̨Ҷ0 ZZ׉QPC,d÷AoNv8߾Kg>$g# b \ѐqW+@7ԒZALuZ↯1ew}2bd|'W/R[Wty 6])æ#ެpCi5v8/ F1k+\kM~vDӤevOPv5w} h@~42{KϳTOJHy.RX=}vK^V/0|vwl>$ӝ\ӛ2Lrmn!rn;a|Rk[Z{;v?#Q+|Xbٸ:?Yחů=L&68rQztwq@vbmeAث3.kJ

t Dnldt6~Q+Z9p5{ ǻ}d4t cjTQ[f NΏg6HiMseI,4AI //ܥ*JLO,fUI/ܭ;j ^v~>kJ>c>BxIݿvdGLB8, o|,㳰Z%Aퟹۭ4mr֓|"Z#S~TUF= VA+ѩ죓v۟E?nŝj$;*j>UsIr|ڭVSG'7/x Wmڭ s[փ|"$SLrJiyL"NoN+s}188oxqP :3_ǃOĿZt(:g~{x!| {:@'u!Ըv23wB}S#֛a|sa32*^i-/qt#XjWCftעo5bIRg*YN$"䠶'.Pj//&V/?跧hNL{޲wl@>{WAl==gs:=g%x0zJQۢǎN8{~zݽA{I6ْU=]h=d[GpTKpԘ>ewH1eì|6eVN+#N=w[ҋc$H^U<9B,i4_.KjޚOG7ϴǨF?~>s9kļ]&.IY;),X;)鸹8/>,5L޸եl=t:ܞH$dWWYJ3 -q/~/ej lƇ%HŒż͚b }dsְ/uq}RDca*׀ȯ.Ul3"c K3R1fF$HWfK6+h?0|5lM 6Wхr#aΓvݹQppY51.ϧC(-/G?vr=,gg}~G0h *=J7#T&3z}=] anHGQFKwEV̝J[ce;eӦ}[e\,_[2ιrqYS2Q[T1X3e;(,WȟҁŌB[{7=̇6y7Ye8(|Z3qU12B{g@JYqh!`B̊9mT B%0![j"f62T`ڀIP%Ge,|9*ZdeNZ=h[S3>1f(GoÛC5OкɯfMLGMN`(#G5oۛ O. ݨ;3=ۃӃY1=?NEW%݁XiΎ%J&5u5/X5UZ\e\Y g4Rh6b}׾4X0eDː%t.nل  EneQ2X(vxIDT@g۞q=7)@4dHtUsǭ ۲M?^viv{{m:X:;1zzeDϙE8wq8_1퀫^,nv0{ezH'Nk73{]VUuI.((Db =a[lfu +5<8#({&Q\Ⱦ۩w[w,2Nw$aQɣ<ρ}H7a&` MZLO8TT ׶ ܎,AizEkQ%+U&aJvl5RŻ벹9<4\HnD˪ =S9X劣L{}#}]6sgLŚjHq8bj8N!ד;[f͠>> ^I='* UXmY̯Yѡ P\B (-nRb[3|vsnG>Ͼ&(Efpd3 _zݎ1^ Hwq .![z:Ťg1|ۣ=OD3 ӌˎ%zѾ+<4tGzeh)d'/|v<|wB{7 &oL4iuӗ7 m%kI\\74 ;CYV^E7(}!#$hZxC;j}EUUV@\A ںRYYFy)EUzE[d DӀ!6  (22Nhd%ݖ/'iezWr4C/`pZTwdb"DCM'3΄(7^hN[V&9J⒒Bȵ$ђ㯒v;Hm4s'H.nPK-TW'kPɑ="ῧ1_ P(o9 _{vrjYEuY8[h(v`cy̘Q /j&JlQ-/G%~΃p}ݸ۽QH^(!&6 cGzDL07xw@E72%;xfGf箼1!E/sXnc@kP|N_)ƘRZ9WZfl)JPJcp&EF 4NZ_ '+l-x] R>YR YVmXm&'wzmIti" H!FmDBHKЪlU]}%+Q]'vm=Xy >XOsz./?'}htfkD r'}( sєլh<.Oh ^t>pAsHOk7q'Ww&~ڳE'Eyy2OVh?g0YE6+oτ{lQ,{J4]' غP :8BQNJl%?ۻKR2cfYXqU^׌Bk O5` h1C7<#b `7-Tx /_jR3#cqY%۴fFGp+,6pbaCeY |Q.y,Y FyReu<>X߷xVqmU6y T[CͳP9Y~&ʛ﮾Ͷˢ'g׻YkW/$ʖBXXH#y6[;˻;f|'7|'s `?=DҫDc80rL&>;suƷ"RЖNteU}Sp߹Ac9?~c@ːel 0/;@q/ka;bݐhK-br7xvi؍ݸ#%1@Z0^hK:䩺 Iiτ#qF3;PbLk3r-a Ęڼ;EU$d2q#w:V냵 IJƂ J;Py mz?C4n?WtT򨶯NPCZI#m|ĤzQr%OO/ "'T[rےOT+=~:H(~:OWm'n~<NlpZL修,Pokltp]_yHל?0pyo?cAyЇݶKt;?dJRU@ٴ k.l׏vYm-?.?yG4䍫hN>zúcnzsú!ãn4䍫NHXm& ݌OAtʠ=w'W7ChIO,.OYޜ]^TgˡBH!l974{{G=e0Y\ 2.|-MOB (+i*o9`5D˼Ygss-R"T@!PH 0ŠNk[தԍdfr)j!;Ų65c0]<X k3ZZPoD*LPnv Z6ڨ/4@OiN-=g}}7d$)E᯹SBd%]'!{4 c!;>b,tɟ~ot*oo/>ϾËd~)K$EyUX#9`3 4 &nJF"G>ub7 3__ Bpϟݧ#AG}}OoW)[ԢB1E+%UŎ1 Xge(fϭ$iIPZm,p˘ő +ޗNhGͮhH]!~7R)"ZkGǓo@N6N6N6N6Nحʪ~+ߔB4W *"V9i벑5XW0NTet\@Yr釉Te31VpdCWHblS c #s]T%,YGfAn+ACl?%67HR8A|j`F찱KDQ L AK+ 4G7Sb9@&tظΐR7hQh* ("\1[?ԌʮXfѲ 0]EqG/p2 3;b'S ~#](JvSMV@Dr]L }H}O@$v$bW:Xѐv,pEhYKg.qq;.P;%skڹxzt)n''*yl/!r<)"?"ubxA2!2a Ɔ|C;w$WPwF1 y(l}B6g_J栅=t`5^NzsԆWviEvT*`GoK>mP`uA䎑bxL31uˇ&n}hW5Dl5a*`| z~ H- ; 0p.yYyqm~Vƹߋm?tQJ8="Ė P#OaH\$ϡ< EwR-GJ$6Ţ9lIJ'Bc @!6]Jv{o嗶dLCww,翔׫@"[?=,E]N,m2S:m<`Z`6-yp7jY(v%viZ#NP.6ΩJآZ٦PۘJ;ƸϤ}ژ{Cd f$̐w'q`wtdY% fBR<.%~J[l8$hN#y5*F7c*&o0i{MŸ YLNxxJi^YFq7Rl$'8L'J4tQㅔ gYbO2٨f"Du8Te%U"J/#sXQ"tbh9Ƴjξ3ؾjiFF\䗾h? $aw'$S72BDh!mh!#"CDKA* \H(BDѲv;d(0ښ.ԱOTGQ˃1R1L.CC޸SYsDȠMIh<.OJ:?uQV.Qw$-==źq'Wwnzz>_m7rI 7.txfU,]/8}c;^ݷvݒ('2HM~:XB#N/~~hbIxҬ\//b#p?du/Xw v#%Eb"k"wo7ۉ_.;]wVT4yT 4̢ 5_ڧN E.kdٔ:ĭáy-κk,Zi.vюbg5tGJ9-!dI9cufĻ7NjVJE$MLYGZ++ؙsMz/A+/!_(+%L 0\y37M+8Z iGHFoce_TDeRԞWԕw({GCЧ--Otcm}>=0TPΚzU꜎f>ubsGRWr\М Ѥa)@Mbʭ_F@2'NKdKA-. rt+|!G6b򞝭N]ΖӀg5kC6E6D`cƝ3NCʹ 覃ҷ*;uQHe$dJ4^⁅s^-euqa+A"МhGANOoV2 p"WMMB7pDeBY_e ^HY\Z;teE+J{oՆoBלg<Ⱥf9k&gCӫ8+m7I .(,`cHhB⩙n:8ژ6Kh~2ҳ8ˌ rѯ.^quP *2`#2cj\۴Ml[si0=R](Xލ"lmyxV0 >)>D؎1~np:65]W1=/ Iv='TEzq7t`B)ڇY[APE3jJgjfǠ6A٭$+L8 ]?7kYTVX{@4 @]@0?N3I98Ei&r,v Y.i}֚Ë3d 8VUöH7 M f$9HFDYzA 2YROĨ8> t>Bc Lw̓c4o+d~*jhѤn3AIGh^}?cx{2\iLkk&N0fD$)DӞy \\e;4?yUc ^|K0hjZ[ Z[QhǕ3 &M(k].f$>mkR5). 
vuT]ɌRcG>kW܍n[14#hCx b,y:#KNX+X3Kevd]ΓQ!xiyRяM-k6J{RLX#Ja٩NERr'ʟ>VO!OTKIrcO6dl.cTr tʟF瀞 Ђ,`Κ(ו]u%6?UGAp~hPL0Ν|C3!V_&ո]'=n1l=kz񶇵%gz2>gFvq-QXmw@?tx\ΐU gSf-<*; [4\SVisߺC|Sч5%̸T!5|{Yu;q.0/,_ix^.np ?/WWH*jd8haGO>_I,t^^qPt@l;z | {% &L1Jj)cy{LR"9IT6IG®IQ% `3^LۘJ'+x` Z Rt[@ѨO6ϙP*1Yb#d" C(KϪRIq;9/Uj,~c>Աb.J,Gو37{[i d9Rjzb̺DNƛJHuzO18x0U'D1] "Z|N#y{^\v\f(RC!Bq$:a;+؇ ١k6U;B}d't&FSmHԨh浗$ m&Voj$rDg\BiP: Gg;jdjdS`)gxZ%0"߁SkFC슛وMU f&1p Q&T KtzEgRkG`C Os9PBc 6NEU,cnX@=<G;%zuj EQr&5VJ>J0+kёkwK"!2\Y*bƠ#muͲxlވ6: 8x"`)#\Ry`ƍơ0F%KʱTG4G)"!&ڪDC„!O>MVjM8pO7Z36 ꮼʅۊJЊX#hֈ=~1ZafQ ZYNomW`SBu֏^Ӡr$& )\sP $}u7+P.+DR|5:,m~T-h R3NJE*IݸDX_jy reG:W-4L}(( ݖZ"{ņρD$8er>fOՎ(mth}f7ٳ _Rџ߅X㓺ߛy"2|hp-}o~F!yux}rG>%/UMKPz_Kц_,(Z\J] ,^_^qh'8*S@)#ܢdʢBr6/,jnjޭb3xf_ަ{,]{'"[!S|ƹû5A t~FvU*ڻ5<@W-LEN!㶞CƼa/;$./ą .dy*)YP*ڂCkk~n&C -QSQ]?:Tgu> {tb$_TXIO\DjW ˘ 3D~Up@Q>ԇ7,N]7dӼ{O6 _x[nڂJn$|fEg$nCf5{簊{~ߍbet_?5%Ob~!B>_.fo9 ∹? o>Os=x;+Β'qAEVdװF9JoU&PEHP:P4ELЂ"_ 9d֥)¯H2dԥBm24֍]a{lj/6\sBgZ1:vKj\]ҊE&&JWM)!mѰ lr9ܻj-weUYL$֚O@=!$dt@*-NrCED9y&L$k{SDdy(}J2-rurnh@eEBP|ֵYB_MlI*y:IYQb :lnJWc>e/,'},x3޿%qΝE,3֋߿+_P},1>/TaBS$j8]u~@ݠȟs3 Li3lrNۣ7g5@+hx!8/Rf*՗mpɊ3,[b̪YW;vl#6iptm В%F5LFNCxJ *꥖c*h# 4\2$iʴqR`cV&@(FӽS@H2T)*bJoE#U*R, ó t#T$Fh͕u-̎n!R9g6P[㇧6Fryhë&*G;Sy9vGKo*b}Mwoo_\Fz oc[(k_ei*Avmn6llœ- OGh͸_ܧXEٔ(Il;**rr˃:okҙ2F$^wZ_VSln}A2]~t[Nƣk28^9{TS"+c1$-lVe{T-#M ʖ&OD;A㒉vdҦ'&bް7Q GOgyv̻q[McyFɃ?']i]YM˙=-nꊂ/,W]ra|=whaGO䤶,_y)g.[mmOM%鏵zg?+ڸmEm /aM[)m͚g>$>J.)svmDf㸰V2V$j.MQDKz O?!Zwp:h(x{t-y)؛O^ZE_&nveM기jHM$BfX;6y!Օac>ao?!E}.ޛQOeN̵csȶm5OձV-2t]XĆJ6AbH׀J|#SA,5i:qҙLM]8HXYptJ+DjVUM8mYtycmHR;L %!%DK JK2ʔi-͈L%uYH(J}hξTQo\6s^Ca; |Pz\%?.KoEWSD)\(`> K6tJ!1M3m dR5@`%7;֖jS.%Gy*x}o%e( :ZЈ&D1`)p qĺT aM(J M@[@j (ܼ˘_t/$xC:w_VVOXJU\/W݄~C ]n}sJ73/w<óĄI9)/ߌS+QfQ )UG73L&4⻕::5e?B .~(9=tѿ+k!R2ؕ/8 /} W>EZ6eK_`6p~7S 7I8?:;2neSMxQӫ jʼ|,L|} \M[jk)5y E<8f1sٗI Bqh.c>zd0%oegZx P9an;q;7 V˰I/;fF.'Oʱ򫣦u]FOEU٨5}c g#~l5kqK|ӽ[qfQ{6o=׻Fǭ54U[U9%T w;GL!K]BEuCu*)`CEJdgp#rf /.Mgam%pKo1yij6>S_j[R#ρD$8er>fOՎ(mth}9's=EmBt3k|R77{󴼷>Z\S:ׂv~3 À퓛?)|1m&AZ6bU>oX,}m Cn?///b96'((0#w'nMuQŻ06E՛wk~-һ0%3FY8ޭb3x"Bqћwk~!һ[RR޵q+2^Y,^!he6{^|`fl #)q_g$fFD>VAЛO˥\ttz?/ǞǻK[4\=8q{GG/..KrK0f=!cV),V+VdY#2)Wq^Lx?:YBM LA /ot^~ρpb9< ]ӌcWvX\UG[!{Lvu~ömD#iH#X;.c#^s$X-veC=F-iĴ݈F%F `4dr{FD }3S)$PbxD:U܍wl AEI~5na[m5 LznEu߇磪}EpM !ټ`C@Ҽ ;  \v~N#vծ|J/2E =x`q f{¸&VE$}^h57ߤMo67cHdsB"$3 T2Zor4lX۬ڙƎ L&*K-$I )`(+ 1 T:1EUߙ sĎDZ"qc[q+ʅ{$G­ ,Syo\ )#,:!]*,e|*`` -/*O/R3HSYŊTqȘ)K dȤ.u!B$-b37Y=]gvx:Nt&e X:ӉS,Z2e]\pZ$Y d $‹B՗\~QKJKv[+\9%"1Zs%LeL㧖\F*i|36:&᧖ikr$I yZ39*| o !sIn̤= {|v;T7 7`*Cs('tBŇÿ蜼_'9R޻ܒ/bYA{`=o)|m*Ԙ.ҵzqk#t5qeZ/엨VҬ^$X+aԹLr-|wwъLrvwp/R&?;IπќP!·Nz8%9KcKg:t<)W^v 㧝v']P89gXC?m7\ȑ^q. 5 onvC2v[F Fl/˙W7Vع sѐ6V ڥUb}#abŔ *{m0S4S(G(c}CoD^zť[3C8E8U?vt3b}CoDhSzn=[ y[r mdUC >oإOz *)΂NY0XcbV dk%>τBKٝ\K?`g%kKG,޴x2l&T0 =Yɰވ5Xv0b|CބmHV4w(HhCR#$m6jl nbdҼ*H|Jw#"+r*@(iiRHSft76dC6|X;_ѐXR|x^7:=@4щ=c?*&{4 Xst BaM~`Y7@`o,)u,DxƞhhX+6qCN%%uQ%o76߾P_Ѽ uƎ.|.m `vƩ"QzjSZv i2+[uW-DhFx|/pN"pkYy{W/|5Ѥ6}a:WQdzbGA)b㔂6`k6oT;^9 L4Prt6&M x؊ena:΍.93V"P1,LJyI))Q%Si!@q֭*ֺkD(*:=n%6$4J'Iδ&$˵ΕI&Hes Vd"|Ȑ|쀽ߴvd KwIf/qaCd:/)E|r)K\TOd۩ȋLI5JB)%ύԅJH!bT\2+ݵ +jr._UW%c?t5Y]S 6.O\_+X7y7=gvڤ^AD0㫤EfrSP"+xNlI`6j(}).Cal%3 62sΤ`2OJ T%Q 9d,9uIm~rj,d2$/K+ƇIp7Ƴ-7ѺF46To< !ʘQk`Y2/Q<*Ң9a^Aof 9t+:C##PttTULLeYLr;(dNeK22;*N-sdY^@jCFȊ'HHe2Q:gT^ cf\iKW$dhM0/"e#i_2kk-w.{Z 1F1.F/mh&ImZ] %zVsWgWLӎ)m`˞_tweaԳq3Sv$ CƝg4O@$Y:pIWIDnGػb5rmRx[bb(륒Y<)M=;-wfPL f;E6<fL Cwɼ׌qT'ȉc%Arߍ Eb:hazs[z]wWxfW{] -B*FF'V s90 {y!J;CU9Tl2Jᗛb\kPb|[.kw:[UM=o༿)]rt(6aj"np9]XoPBcZ׆uБP]T@ N my@ʑNšOtۻa)@q4-ެ Z*\qp^w| QFZJe7]Qngg4S.g#kT ѷ[ ޴KУGb?G5XZwRǭZ`h ~вݐHY7ª 5u ^yzj{ yqnE7Q0:nC"It<[ ym)"̾V D!j9d$b\. 
8Q1l*f6B U|>q'FV{^w*-ͥ$U}jt(6TJn"d"lXvs4E:uHp&,NNbvϷ}ZI"qǻK/'+^Pֿ-Y\oէ/pD< r*ڕ!ZDږ9 !)GvTS :R D56un7ucfu+[-皞^oZ-c#D7BȆzy®@{dm0S4S»cэGtu݆}YL%3jzGxqFqя8b~#$ 9 4:{F6)))9aĶ_jv7\mgW%;s"TMxH?URoU?r%zn=Zv5ɾXvPyy'6oBOM%;2R_/Y{ij7Nin)=$7㢘>>=>ۣ>^|yxd\]ieRq3I擇nDEoEÍrQ.Wޖ6]]}q/@nuœv,t!h^ bvJóB;;߼22Yd.& % Ο (IJiy^ xUDѶxIrl}ri Z%vvwk^rFc*eJ /6l^ajks?R33-YwG%vߧ%}z :) bLn`YUarݝ E_̲Uw?o+4//.[_e+G }'9V{Dv8dgNjȷ7`ƥ6GZ.0@Vj'7N8=jr^Ak$cZd"/3f G΁a%URXRIn[Sj-aXrcMcoZs>ɏf޲ZkufIKQ:QV g4K3l=$K3/Dg0Kx)*r>zjc R&XDR2`)~,f YʕKݖKY}a+yR4~,E8 cj噥'Rr\ 9.]Z-Sf)^J{X?l?YʌKY;Q`)3~,f=g"KB(Xz mғf~,b)G,܏b^f{α 0$KtV >IT\ת{-1[}i5ӵ3KOXʪEl;޲Z96KXPe|KXiAf@?j\ GR~,uV869mԩ:Ks\ZiSf)]Ww)h55阔Sætu2ϋaz--ZSnD4?뺾\o9~V0xmsŕ͖G bܵlg璘ח K|󷯕@M؆Č"{SԷy6!ۄO^"&lCk:b/e@b@?Pb ecؐȌ'S1CfVҭ?ؙd^=,*fQ6o#,QҏG4LPmZ(JJ{W_ޚz8OJolK-lOoSWhfՖˋ䷙g"*: 5}5i[rNsИ(؞O$?ӏeKn4¾uXܔuoò͡B}%?}|C PLӒ'S(QiYy:xS#JUo 1Gil?ؖMxG9r%,qH%Bp7^9htX8w"cMq-gat-„LlJ6xT>UMLv^nwsjY<6Z=v9Gn..r_m1g# 6B#ěeC "OI%\*J"% "-Jɍ,I(,,4Ät]64ԷZW[ \LL"3$2IiHbu pDY^?{WFn K/dCGRR6&vٻrapܥHI1!Ehluh_S gZXM5TYGߧ>-]~ˏ2Q>0ǓѣUQ%X~x}`4wߟWߟT.Tx9 W?/3]BUt~I'7ތK^v?BN?{f8)oǗx}pl"mJݝ8* 'ϩ4ha oޯLb(3 M(:nj t]@f[x&~MVn/$o+D0 ?]^@:"/7;H)# vZ`BUzMpcWW/O[4NDxn1E8sԟw{\^οΛCrݒ֟3yg;M4Mknm_٫?ɕ6Fݥ[|ոXyJ8@Uh/O!&;ST_;tW!e}cG_Q'.7捑i]z(]!xb3szu_M˿v?ܧEnR$[רqeØ7bI(:ū7AI`xq`EN#4'1sH^FFLkrbk k(ht2خusnȬ)is>Jy:,3GJZ*u.<+e%-#mPגZh0߬Z_ ף]^9 FmzBFPEBhi~wdt @9oXxd& "BOE]+[TgQ}ܧ#D[=?|2uuސn#W6EUq@}BגkDZZe.ԺRy&5 .fW3]ף B+'5sj.4c8^0wePTh`JgJ1/,EYvR6_-dsT6snՖh~k2 1|w`D4*Ń)d% iSpZ6mLQQKfZV7 >{kmÀwO^ut9ݭ>D;^7Ĕb&ڎx D!F9.2h) f8O692aT=Fg+۸}o-'{[7#1 Rr{2f)̓T\z *C Bn&0rSeWJ&OXO!_HX|bJ`./b>ypdR1H‰)W1-aEynP8'D`o=TzZqߜ2#"k ?1/?;}6P `9k ,e+--IK&Tz Fx\C^PQU4~:d"/+L`_zJY泝8F0iݮڳlPw!k|4]*eYHV kݰ߽:}BBs qG9=X6AS( NM* oˆ)yF]" lckDkW>7pY#qc>9UyѷԅB!Bu[J"qŔ8"?O#.^Fq"Z,8Zw%-KKWSQjkLh}Ś,c![b+EM |{>[R6;bSK&mxsۺk9!֍(1hhC`Mp2ap&L;*5JLyezȊtZ?$ݪeh~k!4ciXNf+m>.D7?S}C %h&R-1LNew#&F0DR#*"p1ޏB&(*x{tYGY"C-$QhX<m^: EI=h)w_r>/u<}'5e ª}UIYiFrMO]0-:u}5j<&e}_NNswr;9 OU)?f?%YXF ,0\6v$/1ph9|*'~-Y37F6ޏ|j[S rL;xF3 æ[z6,䙛h#Z@i呂}nM11v9Pּ[z6,䙛MLJZRϏ1MY)uxeetVYXq9 m0׽ ,siQYIu)S_Cj<N=ꗓT+_L&};ugh/RC$_o*v+ ȦR ԛyMmrFg'n~'O}nƃˡfN.L&jmˣg|M%$߶L]LTΏk@Tȣ*;mh{+[K8YӺ / 5'ImĝRWi38O :Ӹ*y= p463Y]37-UlR=E5QK4a񞢺ƞQvYk"pyך ŘR2'BT܈ m#:J=7ԧJpPIibr,rc׌났L1n5f,o_^&Zhy_d@ ~c3u1Mcp <N ^.+?YMRcy}@Y:=,Jcx$B糺 ʏ"Y T7l9kfjC 8}rYV<Uk&x'H-xe[JSv(a ȳRhz/T@&rv,}V3ZLkq+}\9}mY)ceis {aI}@* JJ"Jrg q?l+gpG#e{aI}>/5NJJγRu3-[cRKm.Ԕq?l+e:JY ȽR4I qtVٴ'F/)Y1W~ hG/ӗO3_+$= Rw=nYJz 5 h&zx K4p7 )l*_>O¨con./Jݝ5ZB+̗h,mQ)7aJ*[wSnY.IM*Vj.`VR6z_"DV7_x`~(3lPSoNXT V9 k Wo vxϩ Bt >_k:.d$e%cX)$۶* 0۸fR2+ r{YHZ:LM%]֨^m׳C_1!#HMpQKi,)+pb;G Հv4L:a+vP?jCJa& ;hLaYZ)bXfoKweMĄIysvNuAheXthhA1 : *ZZ.\Pb$I ZjW=ܪH@4=*pN RKQēqT[S{.mBx2U040~ӿvކcEWv(Dowx0zu},]̛wsbfnFTOf>FvF:*#X r6cSj,zt Uu .OBWa ۤjHur̒]-RnaNx?Y*Z򎧠.Tla[8Igq9/,A<뾠 BijJX]K'j=緁3L*j<(!b,j{|WASvU i 8 EpXy$Z 'QCmmFIHe;Q{ۗ4&>a&QotM5H;,X"~]:9A "f<>Z6ѹ-ڐŵ>ljVmP:LK;iԴ,m,xgy.-\W҆-%ďfO9=Ke$]gR؆~G"S#W{8! mJUW]Pi~|JL^;Od)锶R֪O͠-Cli1K⏷~>)}RpUW(S8CZEy+E!UR I((xT {']c8%N;=gE4}igR07߆vQc/R_uu&_\%>w6rp6Una 샕[ru,u1N@yvTA Qa$,x-Ém2\:l60 ҁ;}_ċ#K) Pt3f|%`֒[Ff(9H FPn6x knLZN9K}QĨ FlQ5Q2U*NQ}}(m~ݨz$e6Ǔ{-sXOŨV M&RwWmZR6)fro~_Dg'0%[=V_~|y~ìRތrX)sm47ww.ξK*jq؛S_1qy'(')oǗ԰EC (=|ANGI:St矙*ce! 
+ -PFgGtC *0A LX@8 1 @ a>*`))pi2DŌ&:UAH(3tEY0Ѹ0Lvu߮B _kA zQ@PuBcRQH\| oCn[p8;5Uc܃.&zy݋jJceA(t2fL3w~w<{Ƕz1:?; |S"yAZ\!Dk˸ E EJjp#D0溱6ӈ 6@Z|K!pw(9$WWW^x}1꛿^b˶u<]f>qvɛŪN7OdAO o:M̂'ߵ ǷMՠ3*ZMVWyɽuze\N=y7}t!$]+bn~w% 1}O [k A;s;"z ]gx}D]/Ƹ !C]x@x^hPإ~d?wƞ!I8Psjjǽ]< &{Psİ棗GP!--ems*W&۴hUn8VX_gUjOk2w3 ~FWU BQAn:/ϯM#5n6[5 `/!{[mPEVzmY2*Y 69>M.$!_ɔ⣷5K2v ~G XFev9jhLz8k79J v2ಶkQŐ\DdʱW3MV==5 v0& `ZźcP_ABnY[R C}WF %"wػ&-V2ubѝ\.u(kE䣳򾒡@ +h/MbcVl3Y} rq1=3Y-g' <~{y!4ErG;yÄ8x/S|Ϗ;vq5G M:U>OιB<"imH_سh(8[moUܓI s;& U-s[0濏,Z#5`I^#ϭԥċm|H-y#L{mS=;Qi}8_`?.vA edA%@z P/(,uJhK:#T\a yv [5Ɲ1Q=풯V"spNc Pp&Wt\lK+ bWv䟞#!)m+O/ԏ XqJnPđH ;,I(jtEi/Z0?4"T<ֱG20Ԕrv\'s,ݤK;6|)Ǡ܇>I鳖Rݤr \-31rj')}Rʨk"cP_>ZӬ-nDMAF!n5vf6$OJc$ѓMPNJcJNR`7)%㘳QH)nRjSR7)Z|(R7)PUl')}Rʸ/E8ԗ{P3,)Rrx0eBJ_ZB͈:KstWj+3B@JC}jIJ&K5RBݤR]k~(I)-zÐRJ2_Jj2')}Rʔ2Uj%=)Z"0܂A;N\zvk}R FIBq$3]dy2+F)T(L%pu̪}n0ݫW^}i0y:f8{]jd|&3Hl9( 9DZU'L!=Ik˃Z Tk`FQ5pQk,qg޾y瀪I=x7ݬ[_-| p?=v&%3$l h$KL3j2B+7$c}w=eWm[ljąE 9[8_=œR""6GS`V,/񀫎xp:mښ'KʛzQ=ba'=L{Q'L~Nz)'Zw舂twsGx\QEjL•`αT HdR,OXZ# ,-C[M|IkqdB= p9QW \DUBȐ29:ϴNr;s x8FpR‘bG2ȘL"PA1β$IyJ:ZD)JJX"*PGdd@FkƽgaC8{J-)G̘rAţ(j+yJueBP f ɒ4ys$7)TDF%m*"{{Gu=Ƨ }zV>z[*O^uA焀`)M_Ww>~qS)'[b !m20-sB1Q,` `9i):mzk"piՂaRkQ3,[?G~aQ,' 7%dꎆ9X Pns8͞$Pͱ=iD_n61P.#X;kGuYuGdHrEg$2pZnkCo&Yj]D8ۍYWdcX 7ɏ럞O,o?,V`эiohژFۺ95&Il(L%[ٶ,--fֹGfF4ӂK4۰VlGrI$-O>*tA(9:. 02 ~aS妨{׷JcOjYZTr"-2kUXTji2, U6d2 !MV$EG%%|vz,I[#ztwҿfM|9"~1K/FH_QM!!pISۂt7j[+GrtOaL)Azqn`hh}4iGmqٷcl3P~>+robh8J?3JI$C/5{=#lM^Zkѫ-!Scs+CwҖgVW ɢ?7Z0Z#kgF=P\∈sD!ZDnH GFlrfLJ:S3$1<Ǜ$\pIB;"ȮCzAĴШh$bNl~pl\&'lO~Zc] \e׺߮vs}`6`>n6gg9ZUMρ9rɮ'OmFoolھԊb^_.,h&U1j]vb59; 9~[`17KĖ3>X x%E\ρQy.s ͤ1 $Q9pK)gIN&%sAy "+ҌC(AƄΑF )0M"nqj:vy\ژj=gJv=t$Zqjp A#7^`7\qȓjL#ohsL :Qa %PQ )g4#T\y^R 4Ҥަ2Kp"T Qk/$\\<$㦐91LURD Z*dd(f6zGSG]Or?$J||lm1zx"D˓x|V3O+D5a^ =,'JJ5,J(k䂃{@>=X Y_;X*<&N/ËY):d+=%ԤW/VzV&Jx}#ZzWjB+=k+H+%֠Qp .MSTuo?ݩ>-`dmN$iws)G<P lP$h$\;7֜x$MID yQNs/vFfƱ`xmbgܰ"i>c*E,R`2eU4+.A)[TmYn$CYSSӄ*ʺ4%aBCL)I2t.ȆVvS̈́ǫB!pp~f$ `WiڛcÞ|S_^(y8^<Ӻi}iHi#ŴkA])lˢnQۊHYp 'HiR6 6-:އZBwe5qwU] mCYj-M}ڲ("g*+j:* !'P >( `hY M??an~[nnɚ?l Pd_O ^Vpͧ߿~%ؖ}l~oB^RzՍ WWޗݫ6EÇfe/ƏW$$X#;Nj_]R?rJY*E#ow~fZ>^ 烜}o_ C/ Ǽ[J:t(SbK(9%z)9r؉[q.{Ƹ/;87@wI%G (SovɀPN+0f Gnخ&0@{ý_\.يmp(; Wn !% 'q9=^>h0O\ $v!};gq}x#~DN;wэ^OOb %M\1"Pg3 gaz'v8ޘkl!M f +a<0K@>>l'@R%@# y6R3H?6hՖMq&(sjOq ,=8ȩN w'ՏRgy@Oؖ쪽y*d灭oy_gabݟ6 }{1T.BozHn"Ӿ /51biigt&2^~$i>܄]E>avg7scsQ6S6pͱnYR=fI[.16_tz2_݆nY6jwKnĘN7Rۜ+n:'$һ a!/Dwl "m BA5p޸Sm{cpmG7e6R_Ivpb*Fz4VjJ<]H}x9۽9iAXX @H׫=\`{efg.gM<*1<e)sc8]-9J!5Rwrčs3$'c@+_laekVDVZYqUiuՑo|{=b5;җ. 
(uU$KR΁V5pAy2Lx©-WM"͗_,eb Rx_rv8a{I: t({n: 1D!u\ާE1榡%J\]rR^/u~#EaæT{u ICNǖ 6j[&ÍdLŕ_nĕ^e[B[ېtk,]urJ"giIOWdkF6C|ѸJ7eQQka]jcõ- 6hpyl=R\G:l*n*`Apjd$v#9!xGk0:SR_HMp);+E-6>;ё5mNf:ӡw=+))E~gSD{8%-oF&D9:Z@c?J..n>3E=<?]Hd3e8AHA\"̢Tt<+ ;-ǎc @?043p=yzXx!2jȸGkR3>Ώ`pIiQV8!c#՝iFn\ br1N,B!i$7M4ǦP}fr11oxC'kD)c{H6pͲ)6nĘN7Rۜ[,&n֑m y&ۦV’h;MaMފoNpr\"/*\ ^H"F20d1SD1o8B)pVYD)w&P.g]tk2E`".)Z?}N0$t#+N}v^e%~rZfWiDt'TJ!P /gLT 9CX3q9qBIH C,~;rc)CR+ʛ m[[[VΏթyT'ېW wrSy|w_#,e΀3JMGkWAΒZ ֆCZ[]hY ߋPlg; L (ndŋGJшnHz4kbIfސ'X s騸Oo1b ҥerG դj Z'* hR2B (e{ #"PZp)BiMV3SiB-q溅ƴƔ-겖w 9p`ZQ$ڵfg1k?ĺ$KeF RVԮAT-ps12cΣTi⹘=s1 .mEYBj!\l nq~a 0H-)%7PRj[ OPL9(22Tdq=̵(Cϝ\_SϵG̵Y=(rnub1k{֢iGcȏR,bP$9PN3xX0${{NʶH0pCֺE.zGjS\Jκt"v۟M)Aݒ)d|YeF16 Nՠ˱&I#Igv Jx2.%e1;t<Ų:RF&\5*qsʥw+N{==Fw`$ԟx*VϭÍ4R5ϩ>8&}.t 2p'fU0 +Hwƴ4}&̭@z&6 V]Y_}O2jd|VUn62T7 !(S/AT7ީĩ~U `U6G`ub4"ە{ bQ4+W ™UUsc2p:󆃹kpώQy](uJR~+r@+uѶ Փ78h+‘7w8GtajckC󍍿8#8K堝p 7o*88}G2*~z i`"Ρ/f>=c4:ztCv\{ݏø !b>TahavҀo4p,ܯϸ9 KrțpzS|o'8WrG38)_#e5XmXY~0IBcٌY+!ino*5E]:3&n+@'&,%sU;).kVp,K j;wYk 'h\I"=9a6E`Ԃ0f.y=ّ(%1SzVFYB`+rT S۰*ⵑLUJ;} &4,Ǒor^*B%U-_ <ތ0QD7pXkP"y8B呫O/~qBRzFwufL<1%FaF̔dMYA[i][:[WʔMmk*W5ua-TV*lia]m/M5*i^ ZϥWj홠WzX( (l %JP[JXk55e-U555f8 YkݖxӦ :xý**|-@LVz;CDѩܶ-X3~1cOܢKobOmc6IUQJom=+ t}itC&hT|+M CRJo:C #GsGw ^z;i78V\~}mG*.M,sZܑKK)ǥx.x?tg3Pg\z\h>.er?|\@%:|\sr)*ƐҮN4CN gGFdb`Ϧb.,|rخ@p ho(xٰ8˦;-1:mٖ9 !q$ 0X@02$ȴ;lP3[c# ,Xb놯Ds^EbCQ%ϥD9]}LThZ7AKh ؖ9"lUॕ(?ʫʜZNnQNŽ芒ps`U:N(FɹTH?z׋LVD$`)?U1G:DQc4eZFޔ"XKbō0-к* ,*ɦenYlal~+C:H$ qIFs-9a"g|UM,F1Ø s7FIw5QÎ(d\s +-8r \Z la|l\ 06؈|84CB~kC(/ECJ5&r A1BQd(Wƒq`Dh~b]o.}˖uB_23M q ~j\Xx+7æP;9='s&ML{ZzTijA[~`Pds=^ _i>| {&(勡)o "ܱP3ę<t8ϹɚMeYU:nBB7r?`&QA>yo]oKY6M:ci`@nwW"mk5vGS\w9h MWYW$̭/2~bX]$V"xYm)f%K ?[`Rg%b4{6*?SmlFb!態lŸZͪH`\^gfF#ۺ1p."N+ 4Qâӏ$S!p7KZq=q(,SC-5fkM)>ԚSvX@XqƯpUX fFkC)R}gPc1FJ wbg<]h/7ZtzR{ﳮkڴMk2`RjӛWF۩ɍ}|-0@SP+-&5M[^mI99K61@%0~zRh%qgBCJ$U&zwyRx=ս gxI%0M0'V['}NQCOY oC% QboH2+B -Q,YhYcҋb 9ɬ/^;rǣ4@WzyjJlXtfB}3W՗ K4"]-RSSE E&H8va$UcJ\a! f1,U&{!O6)n5ØlY2-F<1(a)lq. G|K%H"J ]Ҡ@[jf*0a-dVLR)RkW y2&Q{EE6aMQZ81Q '>3 $F_x(z-qcƒp,ܵ_p6D 4%|)np'FoS`bXg™0C(8ѳTZW|{w?|Xl UNb=na2\bSo/\YޟuYޟug Z0F;\RcSSp"8:ơ Uh"#8#TU~Hp_Cj4<'L։-Q2HLx Ww # [٘iwNi&He힉"ҋK/B/̊,kSqH#bHRP (1Z;}VN vTv>"'Z)\SFQwi]0@n1 7I1nbUMufr"|H~Aë &MRs7L;*يUm ȮFF{.I)0 BL8(_QkƇeEywQT%`3"Lȹ%4W#T'D!83n> qDa& L`8P-utl]=vd;; rIm+Ihe//azM3np6JƎ̇c2@צTǨ"&!XlUHB I2eDˇ~EgĄ 52:O# 5&ZSk>nTe'UB$'#:T(!frLXš#c.";Q-SH2TI<[VR0 B%".EԅpG9*g w 86N2IDM-n+L*!7,F"\o}fM'(ʖͼH,=f^}؃Iئ")^BM&d]4YDk&6?fr[5, BXmGMk*3lC>P ir8Xq3<^~,qGE:J]h`b.7Gi󄑒14pt&|$rdEf;ta?Ƣ*2UF$HO\{/o^b›0oC?W\ sdޮV!D2%Y)=2MٳMRxNs(pL%9Jgl1 hAFY,DĐ ^5QL< X&d]z\ e1-;_rƆ^YK8Q sܰ+B)p8&Rɲ3k6q7z[{} ";IX$HK-'h]/qu~(' Jl3Jw B담)%fj)f'mnفA߷>_ZAAϩ582#otOsWO/t=v?k^ mtN]~b: -U9nFp@V5ai]}:;A{vk0ij3Փgk~5ZQح8yڹ~Leu%C'aW^yO _Θ%s7X]ucp( ;Ϳlt{v`}y৛6r:h^I|QOj^m HL'h;l_`7>kp';Nz|߸vr w_#ڳ7x?6RnDGu.:gɺQu!ឩpOʺ?[:Ǣ:oJ7Q\W*g *vVh$@L8.݋kڷ.=~h¥_]ǯ$jI9ȒZzrՇCo]*GyhFzvx/\ߒ᷷S?{q{ s_Rvg?kkOƝ']%4@.=A_ꠇ^Bռ:}w``jOWA\ ?;*Y6HH8z6>QG)ZfM,Lup[i|>^n] M7F`k'w?%S^]ݟ޽;zw/ԫ=}}j<$w^RP*xA/ߞ?5s o/|܀;P8~ox\9z࢟ ]dYݴ5`;n| qF+AVR"flګ6:Ϡ@#yjn=nʩ=^=?<9)0L8uCmIGB znj?&sn8fl'6/κ&l8lv]KP7&o_Y`Ц}}y 1AB.~65ӓm˨K)<\;w/zqw3j-o}Ni2hFX+wߤS'61D%xI7s+!.G?X?*S˛,2`Ƃdlf{ř<>Q3)c{=ӁZۯ! 
*DH8 0Vh*"m=tdy:X/+˞)šV#ɼ !6X+'5rm(5QH"BXkA*TmZ@I;?&˰ K$$y.Ir,I^,4u,% 0:xr"YiF[[[@лԩz;HBZ9!Ue^&mک<m{sś`J3BbtߡlC:_# ?.rAx*t&.QQfwsU'+gӾsҺP)E(w{1ܤ/sQJe2ؒWYũɐ1_el v"&Q+rՆˊÌbnC<*J V˪'WȌ/) I;W vs{jB*1)ǔ[nnmqR|yW]˃YZ[R1)BR}9t•Ii9_VF!g0Gs%z[MRj1Tx|w=T3棥tGlȣG#w$ma}+чRJh%Pze%Ҡi ޤ]ʏ^hOD&%%}HKzdYI\8dE֞& iD.4t)[6u C1t7, ^L%WEY}&ʲ(@h.kDa*h.w~f0{!X}W֧ F]&vJ9>_=;}lbsz2>uvU^ęgrI&wMdac6OFJZʦP%;%5[hl T•ʩ;|wr?|**_=ƾA,sokȥ֍ܣnҴ<i[Mq"}=cӚ>}& iC綦1MO߿>~xnqUnƼaBWWn_'?ӓxq{w?=-N{3K{w닻s3ELJ|&pmYU:rjPyruN9\~:}_'W"#-!'oUf,$-=-G̕_nV3@a%-jkW+ȁ|,TE܏+SqX{S#qxW%)`%#M&o $RmN:Yr\P p 3@NH05DQ:LgtFFIZq ౸Nׁ::‹ sBj+'}TYyg}ySy7֡U9&tX2@ڀ`R0kabR/.vȡE-!' #rcwJȹhN>xm\4Sk wo2;g/ȿ9$XO}{1dΰB`s2*ΒRSL3P:uA{Ҙ+qx5(tL%dМU(1-IHXx˨<NJ/%je8qVٟSNp\RY8 (Y(221 梍j^5w׫VT+!U.p xtDkʰNZ2 g}H=JR`PuNvAd9^r&_F$LZR՜)([j5Cq;d3Wџ]* \ Rk* Aڐ6v>^< GB2h\K~b}6M,bCR8$ sѺ*Uz缙Kf^ n/7_ݹ>dn?vH#%Klw,=|, $@F2+~ju~D|5c<qPW' P*x1kA%@͋է-%FbɶCq`oqqPypn') V)ﲇ|w}H:2F'?gfϬ@'}{i/ғ8 h>4G7[kyֹ8s|zk|dz4y|-_oKqw$~|)dtsa-Ə[$I$/\@Sgvj%) F.n`#Z7r+x#f_-'/BU|:(<Z koVhɺiG[.n 4ǝK(Lpu9$䕋2\ %W:l;1Qp1oڊ`SjjlKN7 0Lш)$xgY~slwNnn?}?{۶!9lEx:g#䥒Ć}W@K1"x6fCk.SfIіVp_D'QoI!aEpaG}˱z?$#*2cP[A٢,ɨ㓭O矒i.>\_r,b}74Rȃt4$MAt?VaF@> CCt)+{{a}]g,@3BSI.$U@DsMɐXIMZAVM'#,va]oQiWk2ƅr[ӏiq?f~J@;/_y6. ^vZ-"R{fɗLi%vhc!8Z)1ER2 d$C1$͘ CpOsZ"jiTymS1u 6I"h$;|dEIb VStÄ(< VqSBml}CKN'p !s_tUȔt]x4}|l)3Ʃ]j)3]4lGOq\RTBDPSd9G;t=k"BA,]BHN@~[|D0\Nej{{ V@=&zLiȥe4uKْt6p e67W i.߫%GMs_P>s[@;q 17R{>'Q|pmpm,)*:/:Ϯ^Z\NB2'KkkC*&.Z* I9U 9lkz. +YӤ cGp,_wU/1 bt C WKW$1e$pAF͂lB(Jv{4YJ԰q``ߢhVjeޫZ 5m)Œ>" zPZibθJֵjql.t+1 Npm;İfjx`.QuNio^JL v\!-BQ{f^s$/kO۰IUNnA;!uA["{%L3fBI|ߌZxS{NOU㝲:JbB#utK>V"G,qK,$鈑";2^UC ^EM1Y_ wUTMmZ.I@R`8"K| 4\2Bll0,+#%׉[BT -7JWdaș%vF6XtZAil{vA@ ;5+ֶ'{Q*T չE2Qfya(uk#ɋ"[>B! A@I.5%EIڣF gGP%$nc p:4d#Wa@ۗf"݄ bÉ x~X 0.&A1|X`)Ε *E }3U$OX[\RI؆uor.Vv Q+A HF}B$,) I.H#x#øuE?YvC&rmdB>l6D_@C],jwv|<@c503$wۉ- y)nz=[tyօjj#$tVi?Sr]FLY*}LKǏS I;ԯw Ncilk$a15~qCh̴) ~Ҝ+w^lCsT4c+ܟ#Ww2[ QWW)j(v& yC\*T6RQQ*pJĜkF~<,!T/lmMLdS)ۃw?Ks% r K^|6Q͌\Gl'av\qfUWٱC>e SaB2V(rAj㸶Ƈ[)uۄao}̱D^^+IA]bkä˲[<:>Xa LrϻJM1yo#:dcҰBϩB( (1,Z +b&*V@HkkViQd5&IO GS۰^$]Η@dCkDl[Š?9`ݼ?f|axop1 t6/\ &7o7ַL8rFCz JoeC/{ $+O~t4 yf#{F~n: =v2o?nOyf3@eLp戮@1ccQqI0l "llt[}'6N{-Պ֎˗նRMj[Q襁C}9\nR1kKcV]<$^#}[* XAm ZsH=\x< A cU} +;.\hc5zkZ]ݦÓi-+,R+icڶ^ܹ JOKUO#X<Ӗ)%2QKY,_ӹ*? \rûYvO9]<{G T]7f,,:@Hq Iؗm[Ȳ#8üh3n/M7nҴ)QPyNC HR(ho~tGH QwVFst``PޯWzQgyl,kVW7|˫O?}r6˼͆7Gqp8l"&`nal+e'|A t.e[^:dxMt?T̰AfXMJ;q7g #3, ̰>-uc$"2PZfPQZqG]fL֨KYɞx> 2ԫMޚ#JFuH+EJWwbt`ثuJ+zEJ;4G2Lhz5M֢v>JRj|tZiD +}RFAg>jMZiw<.4S$iK*$+eGŇe@$jo(k\هzQ~aqK.s6cV+_e$gm+xWI^Iɵ)/@1RZh$Bb ZI/ynY"AaNil * 2Ja0L$@1pVFSBFuJxҜ0G>7ȇ\k,GWz`z?" 
/Pͤ.N6/%5[1Dſ7MDfy D優#7TvfZKITMG/y95C!ܖr|8`Hp"hBj3Ugo_(@斣8 9",?Tk _dT| Aw nRhߒ_Tp S!3旊LeG,6J7\>qo5(,SI-".,LٌG2 p;($÷"1K~ ]+AP]NL!O@!QFBkL@;m(r)Q-]iRlԻ"¨ES\Vbc>4Lw"m>we+4,xIۏ7-'wamp/1HCUż\[ 7{;^zLXZ8#.Qۈ^Oeg¶|;\["ZG쑯aL)U= nzKtyը˫=\*o{3 bfnԅ8:vWZ};`iN^'8 *,Qb9ɱA (W D0 )S+CW\3z.?hO?-KK̠?+7G*>o!8}RJVoa3d rLYgr'pZ pMC!@KW`n9iĠH!0 ḩ}a:qfs4ԉ@R̄jc#7"/(Y*fKM=8kω`[iBˮUk4v_m]BN?m8Zb,wN.GFXy3y9x9)SnJ3A]Bg!'~J|Wz瑟#ǯBr+?ᯗqL@Dqgx2*o€W?~'/LO23S@ o&%5&X.XS4"Z)Q"ڔ_.&\ӑ8Bʭd']S݋Q8EkHar촖RhgPa4/(pMs;sW8]YsG+\>]aI:$A@ۚ M>pp(tg}Y^ 8ZҨ;9&  k0|0-, 1FI 7:F]Ȇh*a]+)ʐU!Im2eEN `Ly&XU)Tswe|{=u Rc`DVHH$ÎQnbvFN8V,h@#yCLKG&D*5"JXP OTȘ]&AoGEF6~000x<AizOrTYWo' i'X\\՛71/| Kt cTG\E_3*><,@:o4 l\wYo( +8,>lD-bD#\6ݑ& #Fz+b)pbř1XSai lfˏb'6 k%q,>BSS6my&˷3Q4RgV=~wֆxr^I."ygGdgUײ4x"j|@ <ߡk`C rLH2|AfH|jFvR+Ki^RCB!<ϼeJALƘLXpP2BI0 3ԉÂ`yp kD.&>VǩY;ro~4>[Pjo?~욊H RB pCn=G6[,0z\= DŽ5_,Wt F?X> t+fjoV;]gf+'c0U &n`sKLaqk>k{YuATFɂdDb98$˸s X BaLif fJ1QW Oa(ÑJ ZeJk ό$F3N+YBBPHtQ4>fj@m=8=24xZq@*sf>1 fv:9NU (c "i-f􎷶)% LX8>KbEhzݏ?oɧۈ) N 0xP*pvDt[RK73=U@iXaRkcŧJew_&Que_ gYFo0],uiwPv׳-~eޠ&{GY`P#](x r^aG;KZesBA÷V(K5KXK/ȗ2|R6_8qsz ܯC,^yկ7,]wq{=7u-NgrwV;0TcOz򛨞`G[X.oPWE1!5O/ m]QP'tf~ $ T9XSEųj@W@j ꜶGuWuEO~YbeCГ|"^pdр0Τbx0Hg_jwr/? [ix ȖUT% Z+&33ϤwIi.ރ cEr,1V=&AKD$ &$%Ax8nGh: ~S?Z1.c]fZpGa[\g}tq݁!\!13(W{un߻ӨҶ!9mfD_=3 Jdw)#,6qf?;.)(^"(k *E,چӫ@+#ŒŒ2 Y?RJ0 QُFπW/JyK=dj8++ڧ&rnե5mz1Ss}ETwsJV7.ч ?,;Һ =I1s=GQEDlLF3qِboyK LX'N9L*]'QIt k02&3\pa 781O H&pд[Mx2Ki2A3i|I1،xg ZtF{[JA3#m4 5mI)me&)$ҨaXDś5 =TM1%uNE㑠HݤV誯vGj hjg[MU b:xt&K<=ߦ-.K}sletݤ~XꝽD[d :].q8r˝2#edgh;6;`,E馼gkJKPd9Xoan2OcQ鹯-R#֮2g|l=s<5Cѱ?^ZS0ě K*ujUm.|hPQ.:) {.Ms٫DSĉmsE= '_u5æt0] ҿ[Ff% =)D<@ 6xnX= b,;$≠a␍Z_q{F#(@itF.,C6Ɯ!_TNI UBj [*dQP/z 7^HL%VV"[sJB~8VnT>WKg %ǵ>pJ;l=}bb6XLf<##\SHVz|hJN4c(|xLec obOre2 J plbLq =}&qRsP)d%3f_!]ߎbC 5z}B C[wf}{FTw> jO5|ޞjquOsϞvg * !A~ȅV!V]-w܌XL3W7諾yz?Wos?npK@)W+j}=W ]}m&V7yY@~+NJ疰Jl,(Sn>):Wn+[,ijᘛEѦ,/]Ꜷz?eXOh'j{sDgvÜsn]1HhN*,:v \D;%gTLݺb":]Dx*Ѵ[Dև|"A8'x]2ڟs%w(FSNF/]2qR-831KqGb#*mq&=<ݻO㠀0{R ΑԁJMX71`1B>ز"MkY-+`RDǀeIY,YjOėL) 5(6j˯ӻ:9BJ`PU-f,צ^EN]#Oa KJͫOŒzk'} H٬EtHT;;:gh;9<:3C0B_\))!A*2;JNTЈdA$ rG?[ILt'kI W, (pĦ/A*.gWk憦S]#l%8#>}ArMREj5hќR^82q rUh쐫Dr՘%;Ej7DX V\Q9[9y9Uu7[%'qa*3$+>I%;f1֖-Q\4n}o)j.JDh)"H,-=UFw_6|;U[ 5*M.dJ9`sֺ@#;2 8o(@bX. 1ZU"+8`AHSBL/  \GNd`K'PD-r ^B3<Atߋ`nKQaSL 1Q*R*'*h#H)" ν9&mFK5*A17ccu <&t!ȍԫ^b ԺW {nJj}8YhAt)6 LALj .5mYiO!s KXp?کY೐P#9[$+jx"Y0R]SZ7{ [`+<=|K䋞,ӉRL _YږMApݣc:;^@nhH hƋ(ob́wl9^0l]J^RvM2bN)/9a El9쾿0$`FҾ;)W#7H9WsKy 0}H;s ~rOlP>(V=ZqAĤvaМ\KZ5air4\-TFtNRQ`gǠ=騏@ ]G]ЦC0đ¬1'.A.!Gq݇o)/jDq'20ʝc6wAGފroq -ǐ$b3BP>d2%FP}re 9Ԭ/7Aa6S )45LCnr.\t4S]U[ z0@%ic)=0B(sn9CYNvh樫W#VC映 a#0^nGB$3`чGЅ[Tإ|/ҹ\OFO]:pU.qq. 
@t.ݕn}q.ABX]:GUX /DNR%W]EPS;N&;A^^BZWP69`I\+Kv ^ {*:k0>L&_玅51of\\[]tkRʒnbNΨRWPѽWEgJ"W Tf0+Nx䳼2~Ά/h)]E[}ŕ)OdTҁ&|ʆ7E\ =#T e8pyЕKj:S휫 &RyjߍH$6uVvҖ5r>4'rN@,@`ؙo T7P͟!_؝{U-Տ|K7 WVXf`qof .8$ʤB*P\ϹI<ݛD_f rn #К `>C|=}N SpWeGrU@Mk1p!|XׅɯTǟ޽ IX.|&[e*lJ)/aK4NykX''ݗ$ {,,y {X"lX}.)}B-}')$?=͕tx7=VWLd 7_A-{&P~1;r;mpOi{JF>6ӏJlSIQZc^2B(b:#h-TR,!DncH]!T]dfJ) z4;DLMy4<vA+aHfCu<擙B擭6L+$'Obuf3{L&kg,ẁ M$t㞇 Iba̔#TQ'g HZZp&4)E`X2W\a\j dMB{[ 19^KëRv*`G,P3rIWtOS), Bk50w5ej53&ы<-u!l|z.*J&mw%\ZH^ @ ¡:a\Lh]m&0l& 矞 bq_&g]y/3K>\ebMF/]&:q4Wq6+U;T1GtSf)oN V3#&^}fNĖ|OGbtt!s`lN!\yRO& %Ik ň1QT(36y~Ifû{6;0&u ſ"F^rƋE8DzwnBSGV@ :~!or#U늫Yk !γҁd-E}[KjDfy㟕Vv'kN /s<~{/ 1ڧS'E,xy%Gu^{{{{^n;cBץA[ 1-dsDKU%"0|ZziTEUQڗq|2Z6 !#CR3 p)aBJպ kb!Ef qZiyf'h%2$[@W^kE9,2?^+/LBLvB2*¼9F6 ̀/ܼaٮ,VJYH)+lYZ_*' sRHqofP^J)] |9ӗ_׋?5m~tnŁ>!;3Nuo 8+rЖTCNA2*I?}~[~N$?zE[_×*5squ.<]KtyYAf5gi #+$6{mbb4<zͽXHSr\I Br/9yEyV{T>Т0B(ذFKJ) _ L+PԽ .N+m MCq 4Z7 Pe,c9Ɋէ>Dz]ˁ\1!`VgNp J,~ #Y[Y>r@b¡_?ʯXpr: T *(Isώda$)%u.dq 4 bE%O%%, &".Nh>YIU`4\>2 ]0_HO}qjz|~R.좯NEh*,3OΠbBoN]DPt޵mY, Nis cj@Qr_SE%:cc-d#HbTuVH6f]|5&T@[A0[TEQq~ %a`ɏ&4_(ˆ50 *p&W#eHfiDeF7,^b7RZA${qrS*yIԂ :9IK8ܕ|%8Q1Oe,I})_LBfrEWñ0Æ36e_', ;.!D Ϣ2<@瓺 9Rص4`TB9sP2Ҿ/AydŌYaj3db< 1pWUp)&\؇%!tUܼ+ tNO}Q3Fi0lL* {O*[K@5 _.5^2 "Vu߅;9tary6`Z5ZZMr REFgd>PnU[AAVrЭ2S\r`0@U$ت&-u6lB)s^wmm|9 $Ad_C5F%);"T)iDҐi%q5_UW׭ 4OGbbJEgD@ Is.\lR(]6 ~ќ1ø:'3rm[ȱwSʍv̓$DADt`)B T,!IuAbtpNb,QjS*׌YzJ<3hRu+9g.M?]UmQco]m^:/~l挌5Uwb,AQ2#4NWiVՠ}hZKC_w٤`]k&Z1ڏ""̓;F9pdT%zև4KvxL&͵RQbjhIP?g(h8eͫ 1,:hY DH䶕 MUy ckC)qO,#("LыB1U5pmg%ϢZRyu g zyKNGAZ׊R~y>'2+.9Ɠ8) !)+@@ylZ  MPsE=iAxO*L0'b۱гgLT>\LQ?QKcoT"];.AH_(#ܳq[+Q]c`ݩdDm;QJȞg>EZ?vv8`g^@8kx8+Os_yW "i"ތS7 _r4_V@ U_j=le1!G$*dKp[;X옗r֛!&3¼_/l=c<mV qi])3A.v;G]$%'%p*؈J?R:V66k#SXm2F`C H31t%z&S!vkceuD0?І*x!y'Z` (tKntQy[D_mܼQTU6iM-P*&y|7_ď׳9_:x7|=t,h m)-,q5ï?~xwh?̜N-^ai2GuQp8G烖[i`1jb&b+imxTΞ! QA\+ܤ̯ٴD5W';gs4п&Rm*:|2@S4&+}oQ7n?B+B P P0c*%I/TwGc@ !r4?|(~M|=8QP9ih;y4Erz;fnqi X嘃Y mxE'EQq{/|(KG9_:K*HJQX"E) >\ĩ5 -Mj$M\t 3wxL}=;SwMQ6uGUMݪ R ѣA;5EjT%)$::.dKR"E@Dɵ^5믉-I=ݻvܮrQ]#&lI(IkFUHfcN Hf'-qpfC7]J;cT3 -. qZ̀K:k Qs&w3t`QrNpŒ2 jQc]`EB{T&Bz)8 bCr?sTI%1%H"pv* A$$Py%D rXQi!k@H( #0|HI`' yF E\q.s;==(GIxZ9\pEHV w|j'SQNTDMf pQIIQCT*% SlkyD N!5=sxV-گ× "bukYzN3*p=0օlAI- /%$~[bm걶R 2ʣn fQ>+`\QP@Ci,blM-ueڮ(QVE 񀆿 `1F >Q,AmULKmCA '#Qu 2'8Xt&\Z9&!bb%)mAڣ)ÕʲӉ) N,4A }J|Q]iG1B1 A݃p;+5 %}7+ﯟwwׇ&#ގcyˈ#]޼Ao- _~"~|;d:ۋ *-B^߹ϯ'gs=۫+?6<|WI9A%¯k?%bJz@"JJOzTnXC^X]]xO>v@/0w8.4hYw`J ]`n8qMb bz% =7h|c]jRn40Zp8MtR*H0.y(Z6= M#~ R0|+ `(P8c2漘Fkd]yC53heԸԤ"Jg rpɺ%6 WI\R% %fJtH0nm.GF'>]S\5XrQ\'V k PMDOtGtD%@8I#`4x㌊\OJI 7vӼʺ~ bJ.mRp%O~sb :]@ bw'FBޏ:C!tQ% $2D>Jv~Z%o޽$\M.,@h)rZXB[}>K{^p*# m.;5?KG y(ed1ofNEPΑQ\kYG>WY+&t,nhأ)jl Ԏb*kB f\fPVĩ4ɏю]Zнl7ϸX3;cC97WW<d}yf _fl=^hA4gZWaNZb ;jRN7P;8<}KyǨJ^j$jObh.":H%MAI R{.?OՂP>eXvS, /Q^T\ݷc;9,d:Ni /D913|]zJJ8;ObᮙX&=b]=j #,0A$G={ޫg1ps)luZgiՊ$ Hd240A$5FC.T:IobӤb.>鹵:;j#R0?~kOY)])Ms_ÿ}p?NO%yךW P5**]q#-R9qbjMtAD, 818G ~ٺ"yw}S 9\Jsfp9 ׯ b(d;1 hUFM4|~uh*i+|OhbVF o>#CQG A*4P!V/W)}rv=ޠ_Q|d}kũUɡ BR 0Vs4,uj$ߙMj=~5ցX1>.>TK,ڬ ;|h+yTKDSQ.dkeZG3yJzpO ,9߾9?Nb|ypu=L|o4y *^L.rUWd$\g˗YG;~uq^W7eoD\4zcYOIᵷ|n_th{ŵ%} [TO V)!kQ\qBvUm'Qxx3*Mx!POΏWLۋhۛ=A8fTo0|5+$2P顠 [kJbKٻF$W,0% xX <]!\dJbrPߴ{gdnQ)9ޛQ ȴȃQp)EfTqKKf ێTJj ~h>5w~x ғ#wu^?E&k 1;w,GY3.o+Q1nU`URXIڴSBBEd6lF&C5.ad߆]|_{L??RSuh4Z{ލCڭLc*95M]j%2- 'Q^`Ls6Ż@KBꔧ*2EW4 v-uW_Z6Kh-` WCjD96CvdsZF;͐)TB(6`lT+(צ_0.k_E&%!-W^d_TM K^yԺoz vij4xx9]%KD1muy(=wa|tͧp74e@J] .CXSy/nn4=cg*y} A>^*"5b 3뗷=T-m"t06-pR4M.s(<@+4o?nR!jNs' 1nGl #Q}dHc@Sm-?0 OC~HF=* F;xh#Y;+N+.9뇶Xi L(٢f٢f٢f٢fMj7,x$:cA;A$ R)lWK|Iq}[0Qgy: `r_!W7W%."$ky%2OH5ZN*K](SjKaZIFl<Ź<0H ST_~6$&Ux")){2Xٞwi͎l5_*49ATFHdh0I&]`&w|7Nqjs܇L"N?Sxwq! 
"{x źdݷ]{OHzMs.5R)=J fi5֭e]V5}d7ԡT-m3ir4a>>Pwc#2 O/ ӒPRBwѨTtYuqYM'H#SҰwñBIƮ!]gI;皞t>3*8q<i>B BlqK%6bNHҸX:a]M"fmu F9-U uQ 76H AV(FA{BD[K Z5dSb!"B.Qk"Жf1ƾdCZ5܎rkKF(7⛇a[n&_䳄LuDay}[k爞/9ܥϾfϼ_||]PѷC}_xQҩ^={lY2P0[ }sܨ7SCN@۾cxjrq3|&Ǧ ?ZލBzN6xZNQLzwԻ a!_l+"ơv.",ybg| ?߅P$<j9}( jJ7ػTD'5FC5ĠF<-$3%0*i zSO@wY.w-as?H$-VHb!}l'KO?\ֆָŲZ&!rZ}4lSMFMV1^[2Oec/ 7Nj OU}?äKXWkW BBAQ0+o7i\__V*"T] ҜOiIT<-F#:Yj]^@k!q'paizw4?nLѻw=:@?)+pjT#ݻ>Ǿ ^x#üG UQebX-6ҠGx* 57ڪf*:6 L9PW6H&1d_I?EBաp77۱ Nέ{-`~xWsN+'i-1%>*ϛ}dJ*KF/P0 kӞZ$片uX=iB*+H2xֲƦW %<µidi.,ړE5nm riB02~z:ɋlF*>-C~p29. ))@ 7@ D52e/EpdtZ"BM}KV6o:<ͲͲͲ͚d= kQ*h,X΃RF`Vi"0pS,з^}oG ^147 x甔t%c$zѠ#,r<56j&P@HxZeĴb;#7Tjy@sЛ t(-U X/W S4|թ~^*K`gyz >Eh5%Z -ix|t!ϼ['O>,8#z@M ,[ct_X8ycD:YVG9Ff&X\nxzYnR&E,ܤF%4Gh?U\dxKˋUB)q5c^9^X[ 8eԭ8#84QQ!P~BC.uMSwuKj=7VQQ`Wuߵe͛:(;nCtL4߯*}H,(5zcѵJ]eO[ue'%-b#A'$[c($kO,͚JCU!tS\,47w՛tQ 2̅&JsivTJzJa[TAjis-%/;mkp4v2y 濸rf/&VV^9B1zyr E|"ƙdeN=,KO^O:|(zOSqi$pמ&/R;;jݖ2u ۯ}Y0<`fy̚}IvMH.t <4}:Є4V_ gT-%R}۷^ܷ}ۄ%Hԁ.XRԇ7<x#,GY37Xg4h F;D &pIޢLr.qq߆zq[Qx6+NYE>JD2b"!6 ʘ5 < rġ*X2&5NғE )-ϷlX[MǥafRFV0Ns┈!YN VL4P kT 37ǪA7GqJЭ޾z" n= L%٩o_)V)Ģ v.1sB ?zK!tnMjhfsr{^I&Y3Qgr1t>] 2KpYKO?\֎q7M]k=̯Ok!1okuㅼ%4aV` 7vRy}'gOU.㟉a%+D8B悒aj{[_ i 3|+/{nvmw@i8]_RvYcRԛ 8qg!9w/#]} {A*F[)!ȏzr WmKn'PK]t&(b^!eT0VRJbh2+ `zzv]^T^_Z|7E?W˨"J{=LhI *$`'ȤPwӜLk a륂S^x%al~3H@- kDN" ;tOXpR'd@04ʒM[85v[Y%:y1vPp5jӘouOKt*~O+TbEO+T'`BOsrH "9Oa8GBG8hƯjun:[Fb r](fJpL"Xs]FEeY)Ei*ڡ]82đ6a{2QU4 RDg3[ݮ&Uy8t_^.OwΉ0$tyw}y5]\ϳuWny%&_۾ly h&B*'yn@,k3r \LgF&(:GnJi ^ 8)rwM'1dӭ4T@$vKmG#*Til 5.o< (&8hkz! v]R&7֥d˗V"%cнUmu^dsO/؁[\fJA6%hLi#"s±< !]zsjekon\ATՕQvFXUb&$ϋ PedIɌR*A-+L.KSdF)V]5mad=@G܁NypDuB Lv:7`seT'8Ȫ: ˱*oy BDcg<-ruejRkZE}GZ~.;tC<X0co3fUmB_aJΏ-Mh26& V=W؞tS%Y<,&,] VOKH%-8cTc+gsV:[:ڱbbJڌp*Jk?$J6a=)ad=< 8^$<=5pX*:z[8zxF;zxU.] ʎ\'(&dr+H z"#TY?RJacۙ/R6zvpR3c0eL(1I5cS)_;{ĶG!/+M[M8qF~iz] 0 \4גh8uNa-=THPK6:%s ~> >]==wՙFݐSsL 8D2BJQiԐJd*@[JN#]ÈAW۽#`ʭ6^Qc*YP)t+9Xs &5#ڎO7wC>}w;\VyaA1Y/ [\,L2]sCR?~w[ʓO A."ԪLvVe*jcT1;Eec_/65_Fs4Gܓ5ИQHPs^{*`Q zcϠw06JS"#ۼ;(_JCÈ)b|jQɍI*u.cg,QJrfsZ^?.ٝ'ZR6u*?m(|4*=&? >e4/-|\h4x/E^j*HQ8:N܃ (" Q'4uJs_$xNavTN=Tfg;@RCBNf]J8LbTv1$c Z̧vhAIFq*Sl=qi{FɧfE.ϧjun:[]O dYIˌ*/XAgS0Lca! YQ`&RqO-(>[0J-HML=>X5 9Nf&xȥIZfT !hSR)жtIĉ+45]J q$,S|F@R<m^* POá38]VFx}(xmn^_nrU\_wzEĭ.X"}s߬X.cY^ȳ M*wX7WW6Ę~[ݼ\&VſbQH%Ofy!8)>ow4 }{nVu?,SV90o}N_'{BWhλsSwRyqpiGb+gvXhR)KE"iagF˲I {.0YUlLرlX9[oVT$xIo2oy)jO߂PP۹٫@C2*L7 $4]K-؋t9]+$p:ª&/~G_M֙3[guӬ\i("iYV1dg+GʢdhGc:m5s-isy5(o|cʤfieSdA4+QCRY%P;$.R+T4'T)Xg}hkݜ #O/UFb0DƏícرq Bz2JLg֎FP"W)Ai2Tqnx@ f0`:`4A[h6Hzb(g\Q /~,͖x~9TEU.4"G&MA)9ES!RZCfH(`Q(R6u'yGhV\6_g_P)`Ε *`?oQ\kcz.[s.mmuYn/wEz}Xv%rޕVۓF|7=\:P Q]_UyJ :m#D=Cl*f 9.^\R{9U`y;y5\ŗrGbNnb<~u=5:Sk@3(.lX\DRB;q=v{Nho0y BK)zbR7SYRGEo=>DS-Z~S08o`#뽉@ 0L CEmF>FjY=GJ d87'btBzQA(FTNL,j n{MFҏG췖ZQjU.:lIj{-VUVl`zm$ Ձ)_0Ϲ?ܛMKy?GcEl_߻ (ea*\qռSmcm_cj|U57˚*W;YȉhMҖ ly7k#zX |L'6^tgu*̻''yz.,M4ʦ4;)E;)7M:W@h< '3݁P> mx6B1 Ar&ڰ)TQf%L"/*OVI-:/;KJ*y>[K:]M"Dw\X.%ٙ7@WoijDz}BOb?MɔUs մni:Ӻuu[7z*U$g22BrefFȔKf WQV̻glCQ>co\>S Ayǯx2<ewH=\\2Dcq56en .*/Sc/( X^ciHp ]uh%.ugݱuN?pIwЬ 5CM8A ]륜K \5R~qZǘ)II}%甤NIB*kVk5ǴҏiJ rϺn\DI I}Ӕg+=^+6bcseYhVz@ꛆHHm+lhY)cR9 +eJيWJ*/KIv!VzVJ[+U$4@jGkW6= axVBsKLD/UK!MRBr@Tۃ*Ps3M,]d<`eI9yio߰d#??{WF/vNpHrb/|eG33 jIےb_$9jX,'Pr' .ګn앂JC] N4t`#v@zfa[[*.ZT. #Gt]VrZꙗDH 3_11!MhRE"b :ҶUXiS{yڱ,2B#)⢜".)⢜".SDU90A*ȸC)Z@I*vQD|F`&mlnP+n~L8l)Ϙ3d`,jJBSR jt8:YDz9!PYyRF~ F_mpNǍ䖶c8NFeJimT:#)IS $!֜Tb@Ҁ$CBB':*i_|L9e|/ZF|)GOTz8MsLdY_ Ybx46S 7O7qɹ$jB2P,F0!IiT )fRhJie^ !V;{| Ҩg T0V3z Lʯv m/D&̺5jcΕNNf ݶv`#8aqֶ6Ӑ B;⵮ͼ~Oc]oe]6XI{ #DN`z%9?) 
.ӥF֖^@:~>N>xwS0Ԟ]_|Fdm&z12kZcHn"4':?%1C.&rWr^Epmd7$f |{Ң#Z~zSn4]S&< jsBC*7]GgK1F0 S+쮤+1F^ɜ:0C?`00΢WC:tx!)\0VQ΀ a ](xN)i ݹ&cˌ5^7Nz87w ח0X&?^XveIUӟ71:{"9OIn,}{ͱ'5M.Gt9ْUW2]ȗ&s")ԩϾ~&8a Fp?2P1Td>.E^R )[r ٕRN >N:{Käq֐&C =B\ZJSYikYVCxx%/mUyV:D*fǼ D/\BbIƈ5WlJrZxyt;+#' D3ɘ\N!1嘟[Ƿ sa1R8QIocu/>vYm yͷyX^z6+yd>h:/WY մqD U|1Ĩ<c(Gi8ml̒1)=3P;bl|G[[h;= ?cR6@Ƀf!Wل=8P6 I}d1Spnzރ.cةt଼q Id󣽻(#}cMx0r c`+0:(.Frnθ!6!ce =V,s0آmR%A)A2Ew.+yӵ1TJD?wo4}|؁p4xq+44L䝧jUf|XAM)2c :ƽ}RRnJT QE/Zj42'PgWF2d{ɗSP:u>/Ne RQՀ^p P"dAUbZ+4Zdz%8½6Q߬K+[$#L*[Qb̺ Ak%DQvXJ"(Ph&zD $n£g[aj×ReD&Zgc1Nv O eDb<_ vq~|wV|,3,߽i4ZhrzBKN[Q6(ms,V:9EsK[P6Z{I ]FIBϣbN>=~q\P61HgpZY%")+Wش/qR3 ~`qY JgS? IS 8 %;imW YXpN z6JFD Σ!-!?y%Uq$ D- ɛϤD9Z+0E qv4aՒ/,}/{]~&2.A,t*ˏ0lܕ TM#)`cĐ"Z|ED!jNÇeP7!#m ҡeA@&Ui bՃ'/Ic'FK&pt:DA&!8.4,w@|tYpT9X\,pS̖KD))ˍ谈䭞> qe~tΧw PX3}d?<<96_=~\G!LHU;oМw+v~L?{댐?/Zܬ~:KɆ=BS<}ݟ$j9G+E~w>)aWH$Nhv'wхXDSyI LmDq <奀ĠʹhOUcP2q5?8mS }=[CR7T &$z6q0抛D=A5w~ܚF)Cp/>6~xg?Fdj_!y/{N짫ϓ3[Pyg˷enA Vʵp{c40uța3F[- jY{t=4|@U\ix-M?ҪIbBpuA'7ӳMXry!~__;8r4y%kw^L{CpZpЀQoh ˴䡣Vk&LN0 S/%@xST r .4\ oz =3rx}->JM'`8T;C`GMqDVDZb'FS"aTZUT7Twa R8 \ͳ k.Ȕ\I'/弡fgrz^7VK "c)yorǡ%-$%a\iʅJ~]Lʸ87^ctس>x7=d#_GK}\3vj?/R%~W;tfotfj+zKX*ɇ{ Sis>|^?tP!L'w8[(*\-?qFz:CԆvzGNrMB J$vʯ_Pz-`VG]MnբULB(1&+(7jrN[@|")fgE_꩹Ώd1SiÕғ6 WhMߗQF?-.{'j+J'KJq%Z #ZWbXOmt7!P])ݵ4@(9nSdym|)61trW wN}@^4}UC_U3~(9<^ӑD3uS֍06skH~G7}ٗ {נsIulpP F^1Ҹw:WZ pP^g ($QۏOׄNV$j.I tc"_iT~đEETTlt_[mjAu!RʘmL2Xm ain 30?g&r*)Lց/Hr6/ܟ~MDVbGalÂ|TQJz@#@^rQ\b‰TރF !sLj?ޣy\K>9 ]o}Z| jnmuXI{`YzƩӳO*Ҽ_jf]^ S:gv^e'<'w xuk| 0(ɡeI㨜S20Acɰ#.wei{h1x*" Da_=cp)N"l {jge72q9a7`ˤ+UԤ) }1r[:1c,SVn/9&K?Bԏ%@eD!"8c-<4 Vv'LO&Utù K1ZJ$mד73WlxwEn5-HU 00ָjOUj (EJ|AmSJ˧%[w z:;@ }?< %!WoZìt[BǍ{רs gi-2%cVpKkW:ˬ *?8l쮊t$# Lg" s@XE{i ȫq8բL"&E+eB`}sqH>`n6n P+qa%!6"c"ڰO3d&j3`㔨 ý-S#ys5TTicz7^9F/CbewvqRthy5mzqT}-孧 Ǚ[f]-`nu b4@@MI יJ@.:aNYIUJ'W[.fmjd' LXWWC:z`bs\Bo2OCW^qA^V@gI'>>wn~{ԐmҧܣsD׽>!!>`* Im׋Ce,™`XPyUA!|˯=mÝZ7vZ@_BdȄ0heSM[[YY=ZHU3 $O:vc%fHYbR&#lv@!z[ ,!Ťk5&Hi 2Lq/'s&~Qoބ'gRǘ%-6Xr[FXqg¹G!RBCRA+= To=aA*xM@pI+%^1*F(7<;O9$ 20Aq1F1 hJi퐡$͐EjW]:zYfd6nU\QkŘ`o.Rklv({8x'/MCJ+ѳD6Qǔ 4meԲq-RRQ0MM/Ex nSjuizOo,$J+r5+Ozď>Ż "@(ǺK3EYJ ??N߿bwo;z6q5#~a/O$6٢Ht}t̹` /F6/8n{d S*f2 E` ɓ6;4LbC@SS(gd)L:I BGK{mGRŹv+=u^RQT\HGR%ez+.qRARVM/@$ ]cen![d^0k%HN/puMmr:UTz=1_8}!peIvk0jHGG! C\8;E4蝕8!$u(IpuyI[8@nE2hΈ0i1ǕOW-)Q#}; zr4l8\}qb7_]I6_ gsV?)c&ݛ4voؽ)tP4=Lo}JN~X;YLY?FG^N,vjbXu1gRiEV&"4OhB P[{6%b~':^*yHH2vHoJ9BR~2oʏrsBڻVt:\hjV QxLymB)#_Ʈ +2Av KM9l-l}0EE˷/cu k9uOXzA?_K| CzSgžS $-@JU(#txHaUϩ*2|MGN E-qvvu/.?MS"_2eW&q2G4SPSIj,ݳ@tq9 ! 
K,pBZFPR A+TuKԳ˝Fv :Db"VRXjW7F1$0 <`3."&n'Nboރ%^hZS@4UZ(K ƴC8@Kɣ8}KƩjN`c< $$ǠS dD@Y {˞L #yK+F菵c4Dɾ'qC)ωLgcPݥWŴy6ŴU Z)BD*CK9FGۧ4ǼeCxt.*JUN3RNQ2;Ϝ`hn="9,TfBX͂Ɗ{* OP3QR$Q X\Ia!D1C7g^ch5&5S`qԢШŧǛ}ou3ݾ4~#<?_>e|%|gЛx6a  <:oNH)gH\#DjY!;0{Fcc>=KRPug$̠`Bq9x= U͋A[aʯJo;%Ie,aa{8+۶]DZwXab ۥ.BI}ք2` 띻|9QzFy7wiٍrIII30gN}R)2 }zeMPH,hDxz:*pIQùx[q.%CaQ_u{ZQUՊ g7 Fƴaʏ!AVH]#g,xt!#*cxB5n6 $ϧ&9~iTNXz+^_l"Сdm@ i[[ W{NsNj]oT\FbP-O}{QnIXmgg0AVm5=UیHR@rOE`ENS-MjHd4CLG*KP̕ImhcF^( j^$"=Z}>J$t?|r}xX 8n6lHsCPҠL&zZL*!~"9x;+3EPv]W~RW%2C<##T*ɢ߾N٥Mq&!s6E1%O*ڶiq:w&6,]];u+k0 %_eBh glf_ElSӈ0"YG%+d8lۊW<^WuXwEkpusDL!-vuf)YO$xU~ybEj%ㅿ%sxq}?ҧI;h>f@NdWK vYKKó>A]<Ӹ31~ \kzu@JuptL2.+e?v pP}3;cKTlEͅꁃ?$ݢQLRBke/J$} VF SYdHafٴv}iG9('!zKV^BuN('/18-S!-Rκ,0ܥ QkW;jv" j ^b"%!$' /B#R< d א/OEfu<nwRmC {\`Yҭur|Co(b]׳qj; j~|qN0 Ɲ;+`4DbT(L/F$DH8Ʊ" ،1G 7ۧin6cF 9l|fȌl̯m_lo7~K-87k%~wm{Mo+Qy7uVLuh=Lwnes+ӝ[+Sq!l3&XHFE{$1e0JpɬΘ˨D%o ͷMhg?6|}\F$_L,I ƒsn7bGĨDd ։h\9U) |f[`ZP"87eIJ5$e"IeS˄ß&FeVia'{4 ck^&̥]"yyD @Gl ؞8#FI5 PQH48QX4G3,@oބL@~0 :m;v1 )uyca'jl5/t܁O{§,%@d*bSP,YP ~.MTɄcb)-'4Y*) KELR#R)UƊ ;ٝ *hf5VJ$ ">YP&4o`eY(لP[8^b ɸ0Vs%Y&Ocb2?r%)~sS43.38HC-6L.G! EAdﱾ~>t֣2IhF# `AsJaM3̵c؃0 *A(c9 (ʑlhf^x1 8+c-yp|xZ&_wsЫWR$& OWм[oϯ WG @7WX=?_>mlҝ\\8F߹gVELW`Ly}}sP+/=ܼ kLc~ 3^ ~$= duj1_u\0G:AϱF43Ov oZV0}$04aYgϪO[ q}~DG[%g 㾗jXp5 0?BhEmG<ǏN'C0Tn;gnx%F@@3ȉRMK<8~R V*9m핪; ƵD1.b TV"is*=)8'>.I@ I/d'$6dwoO+xsG."Q%~zdvчd#  x2HZZI˹oظV,I[&Lc%y%%o򹯄_~]٩u)(F}ƮTÒؑ>cWua9z:`X@F?ۇWukٽ0Yj)L?íRO^S)UEth]99-E65۝Nj򦏋BZ>{9O7UƄ ^p7݃}Xeu:Ή~hƟ'`0a6g- %BE{x1_EYPEx1'$"V(Lu3w˲{`5mօfЛDž (W|1 hIݫ ʋn+PKS)Mdmk/\e\̑Ky+W1]qcG7p?2v¿ތ&U9 ůzf;w˹,H6=wz^n0]ܯټ;?Tm{vgYN|߷\ײު=9=V؛ۻR{u1`l^D̍HC!:%(.mǺqL:aB1Q6Xw BX-FLYZZaSق|ksq*ik˟p^/7WOUӯRU Rmx/󽢃nm2rq#>$ u=Q.b1.*j^14Jv q1uItXj L9?˟Jodz*psT2e%W7Ls2JxF֥PɊ~2wn5qx;^TBMJJG"dCe._$T)L @{dO˧n04viUB%ԠJ_bN,*&V1P ira&х<0 K,4 ""b+F|z{l^>,룰7F߹gVw+~^L&྾`b<˷G78kf{q})xމe(ͤ>o{-$\F޵55;䫀~JumvUUn|}(>u`C(STTqY Tk܎u[(>F֢2e1nUhU4Hxk!eQ %.keMp&PjUAmJ\֬I94>o wJcRn{!:#\DQQӼ(Oy-PX6[!`MZ;|ֺD9Vȹ }W:c,"*K#0H#ÙF D8PlBO[>* J :%E{$X㼄SеO2!vOj^G&'G/A*'Y`"{iJBBuK6 z#13 5[AdjO3XZtkX4/1%`Wd'X/˯ .ѧRr 5IMm m9 I``8]w~Ҿ- )LTXr߆bPVU* րB U4XW[ZkDPϾkNdV*g?5LK] :C| 0eK%|&ONXbdJDi0L NS,VgPQ;J07|s#AÈ$ˌbYlaTPPKQ2܅$8T |mL5Ja-,V)B"*c%*ALO%_A| 6 ٝ/Y|cˎ%wOϠl%TbMxlQU琗s){Rǰ%6=h"^ c cLHBcՆt(FX==(Rm1#ƘI`kc@V:_ ƌh"Iľ0-`,u^O/ςAt2e|~$%{6 g5AQ_8lD,'$'N_CEl|,D(}gKHNi0v!@)C(~{ (g9AF#'60Ni4kEƬ&T+aTr8a YӒj ⷃK9Ei'W@xt-bbKOCN(2]ri֞W|3YO beK-bٶS +FD~-A0!L޲KYXWcc8/E f(Y^&I >I (YVЎwQ×{gŻxzk<:TTҵVw  E=)d'k#ĸh=l&Gac0`yY>zk'=5dmUN|SJқRDuQ2r µJ+fÆ ꘚ+-ucTBKL0CJTu->Pjz{#r{ 3f ` O%i콥Q8A8`()* ձ}#m0c&Cyf24DJ򉱼y׉'TQytl`yךy@#JJҊ%,g8rd$c-hљGEt-&sUg~i ̮j̍O# sw}a4&)9ϘKF~x@:nlDvg;DmWD*,!ħh9LjpZޭb%Nr&?P a443|>H[s%^P)'Rx㝶>;`HKit/]!Աթn?[,m h7/H)Ea= y|*n'fYAc! ~HZwsųo:"Ř 7)7ᛘI(QhM6Q05}5uxY}Ne1hT= &/Td m+ZCV(DE= F D7{d4hFN7V(JAx0h>Η/ѷǔ||U1.G.l9i|ven]ܮ%GCuR!)%q2N\q:i.g*eYsKtZ&x9&hq;^5xM_>Hf&c ԊZNhBgl J  &3P 2VSlb\t߆ 3Uku!w{^]kյ~kdpې|m:5כ{Iɹn&)=S`2Nt@-IR8NZt9e6Q[bB.X%db c~>[̯gXzA()k98oJVaEk ٣RAMZ+hR-(/YeT ZRO7G7TG&$+ޟ![qQ݌Vk1J嗷Tש )ED_cv9c1&[{ [[`EzP;8;,;Pʢt(Tp@uYs =9db'/B8*m|r.a撸a }t;I_m`={T5JQ T].ZI5\0yfu iˊZs`Z~e QH6ЦB͝e{tC*-&p m0&mUjR{~YFDJ ֞8pj%d{`ӹ;J75ʬ)1Ɲ~LtO#%XFJ~HmTϏ7Jn~Yl`d+Xq v+=pNGXTn`U$ aAt/l.Eu;(]t?}*A/(o"SOْu?B߹-;K~yLbd6" ϯ_1ęU_,°㖇ez8.3ۍGoX=E?$+@ \LJabXD)! Ǯ}0egN"6 AOD·ǀp|1H;6Ѵ4]zm` s{T."-FϦJiJD YqC Be65FDIq1v=o֒fƷ9X$ý7-ḮMMty뺹 bb]`"(/PÍw..GVߍfqZb>Kx;{? &i.{F ^5Y3y^0/IW/;.vQSф= -bw61L"XC,o䥱]dbE̅+,:+\E7īgѵ Tb t31VQ޿hm i *1M~[y+A.Ss n}R9= qPb}e` ]Yå}^*\%j?. *װCPY {'p-e%(KSFe鏲1P|H m=dm);o[l”zX qVsVg`(V^g~漝v kyd{sr;uV8Nr C!P#t}m:|z֞$GΛ'0!kCaJ(3ÆՌdeN*q$1F8IA;B ~;#C֞TX/iH9QGG=6<Ի h$/-AE_2IIu }P3 [JkfN( )Ruqtbȵm,~B 0౶(B/`f aY- @q w+3`7Tpa'10vԺ6KS*>! 
:h@isAavDQ%$!v˧*60S3VyYP@԰t)vu%p(x?3oI6 uk|M8 VS"р%B1! clT )v㖇)f^Ӂg*o$ʸB-YQ-[T)'42X"%^TCpD Kj$3D`Rn,ҽ+߽.)ZBFA:BYd &XX(CLy%EQZH12rEE䗏idf,xVfʟD"gwc3)k,ԣ=5;zxzlif?Ej}xs 횔Im1)e__i|xSݹLw~:Sq Ni)TB -7]մu˘1c|i>.ZO+,}IRƣr?lU\dR* SVk4|bf8rI9$ht>´<! y*SBcuc(-W);F!k̺e3jݚА:t|7f2s2qQ{n׷7gŃX"fZݍ~~s꫽}5C>kfΫ+̳[)SO]yç{l=v" cz%2,F+Re9=ה3gW"$ ʰҸ@ÉPb -s"B0bU[\YǵjS[v,ɩ]m s)$ {H*p'dCbsfuVʽ^]Xo0{4O~>@4Y5{ː5cdy PC1M0l(.!g'NڹRMI܅&l*MrcRYjYbo| ivk6)~MO jQxr(|a20=n>#~,d D6>&YyLJIfŌMZ@ W|3P >$hzUI=RkZXkkN9nK|UMR@(PZK㱞ncl\R+V'eV'ɩRM_\ w?ٲcq>bZ:M;>MЭ{{:A|22! ) \KF7S @"{qYlÅ_>NQbq:"@ܰ*lдre5a-=ӊV7Nn״@%"1 I)Qz>SI ջPŅtTm7˓R$4HA?EKg߾L_31deN Z;xPgu Xo gs9rpޢ3 9ܧ)۫cBٿ j&g Թ!-R .P SM R)c<4 &ecp-{32:Jv"N3t>8eCAj[_ԧ iSdpRcFDȅvVJdžP~*C\oʠDFlܮbRݵ']TZM<4D$1 W`AJm[xN aм<æ!6He(Q-Z'30E9¬?81MW 6%8.gܳncv횛w[{Q;w-rNk~#k>b&@.92(n$FbvOk!MLeW7h5(m@Z=]7֣YfDתlj+`qs—s8.(NUQA*\iV5Y)яT}J2hۢ|5~>f_$ NGN' 0gt;C>9"&\䜋sQMY|mC RQ VYb kg*NH85}w>fTr;3Xw agNk AnD"cdS*Ե\8Un u],}9и|w5F`I #XZtO֑ͼx_9W8y4rÖ ;t^9B(SXfJ(u%n+#T(g6.xqfL-: pI'l! 3RwrJ8:[-3ZR/+qa!hco$!c! U&DCG}|=_=WO#+(|J$.([ /U!͌A{jm;P;ZM5 qA3]aLkE<z~Qji$%sdJx[HMցE}(("2l,홪չeiu{kuR/[V/M,sxk "BF:Zq!/XqsMbcY/YJ*yQp4-1(#x]Hִ>m[Qq -,͑RS!Jf)J%*]85%-4b Pb̒hs5]|9Fg8gI$k:G}l [:X&sх6r btyBKPLޗ';ˠZnP3 eSrS̈c&FѸb?S4J: [Q-IɂCE* 92# EVcxJ-^)ijm1i,B=DPZ6cF4T>[+"E`EtG q<[w$@{cᶰ 9\4AK)-Qv$x$.; YRf0gޥܥ_wYX0iLx @@9 B &yC˷@ γemJ߮i-,UfeAk( oɢZfJk0s7XVII0)9 C{&Ԁ.1zYZz9JPj'i L{=C!(t!KRȺ佮}(-BL%8(;(ƻhp1k],H 45bQOM}+]stǘJm$eƧNXR0^> &,%9BUWM)XU*¬)J`Gl)[DYH֞o2I,28 *?'[9EqNkY)iCdK3eeʖdÀ'κw]ÖD9C:$w:S4[k= BkHx秫#(͛V:~ KOQ i!~Y63#rZi]73D?$ucu[J-#5aI?{ ͚cwp;Mpsp\9o.崇r>U6Pp>04E g̿?Gv|OfAhvk½@mӒ&=k5½- Lz2h&µ8n~s07RiQN9ߑKFttf2T;D?e+݋py-o/ċ>xב 3"uHЪp k\8m ZɴwpݪHm\_:ݚoU8\P_DoM$qh%o6M_)JM Ex/_]O|UwǓڭLHO`SZcM@o:L8e0 1G!8 =?Lr՛N)J;6fl6w{3gЎSD9ahIX5CKB.F(K09uvv;}!ìTaVڍjÝfՀPR:<$oϟd4Q^QQRsHХ%Y?T bfjҧU,_,*G:Kz̚.Z\KpT /0G YO$ǮoQjFJwjAt^*:\W[]] CpJ,\5 `.sN‚(d*b\TlzVw3Re*ϲy3ktzدJ}6oL Xtd^h>|]9V*-Կ C ts79s (頼\|hH٫Tߧd׵F쇿v_­+? ]xC1 :C#ݟfA伕 lEk @lg]1~yS)ȓz ,fq\ 59Ҕߔd 1ҦDT7>/tZA}WzAT[xVqR:O~0,CgeY|Fs:]N+PP.-AqZ[BH&i!E$W;o3M ?l΁?Y(bLހ05 ɩJP7, 2PлАsЬrsMtbQZA+ )혽 iU --Hj<zYCM$T+E Ź$Fk dXe8/)uEBJ+P/.+xb<72G W魍GxC1 _qG;xz|/GfB??6+9ogݿ}doa2ȗ7rHj:[٠N 0>܍ǣOo]:zL_MB^#hC0bQ!4<~zpBPrΤ0I}!K兩;Z Z%&L3@*i4ų'ND+ -(hĵ9@/l0B4#h-y}jPRr/eiz=JZk&ڹJ4ohQ#m=R]ΣBB w=9h@KD[7Y[h@nI4_>#n8xhq7jW9ԦL m_fiZV7QakRP 8Ǿ]CMVܜ6 `ds?8MխBd)ޯRgX%![5י GWN/~ <kpnqOr|ΡCգYp8䅳Z*rE-B5xB0,C2 5ga(!Xɽَ*s2! 
v5k `5@eG#8SjfvIb  yG^쨅fo^T0Dj.5DbKփy\xIfW PQH'Z(a@t>Aϫj$HV(ΙPS~d0SWqHd{Xqa8oH1O9)rv#&j|\rs̆`uk] +Ae($aPJd<\`A-i"6g `KrCT fߜ#f29mMޜikP1޵q$BeR{wS @\6'{$J)SM28T¡D%ei]Uuݺ#.XKG$U[nqoIМ/<>詠iu喻 c:#<>ނd'TrLҗ{ QA?83c,̯R%rwVKF+'1\:l#0K+"۫I2B@)j\AÎߣawnLdC,Dc_)q gjɖx/tYܨ`")uL"AUˀe]S-kkTz~@iE)J}ΫIHۇEhH5shn|mo8wtHH@vɖ5Rh&P F B9`B80L랢WBK5 "AX`ƅMP@-C|ΘWO]h1,7Vd,3eU?\.F]]_ 7s;|0[{(sj/$ J䥻j->&;qtn>W\`Zv$[>Qe*cL!)}jD"HߝrNP#|zs1\0  uaQ?7v}όW?X/iR${^q(1w,ݯzagC^{nUK: OpoK -16M|?E/BDN-3mO$mO;K@Gi3 U &ԁuAHc5(a Lqo}~^oߌ]_8c݌Nff>D*XܠQXΆ_9G47|k㴿X&e(r8W3<%Rtv5X JS_Z1TttPwRz\2DWm5UrWkɍR6bczOPiD醖Z{Q5$ "Qp)C$Z2A M+0&DbcKAp÷ޭDȇ'gz3/\J~XTKGR|:2p{0d')Upvr~8rW2I5(T2I5(r J9"4:y΂$P3C]@)(P) Ɠht4TBS!a)4 ("Ict (C y!A/" 60.20u'=H[U+9ꏢʦtݥg 0m ' vbHxӕǝ$DlŠ/%SDyk$*qBVh4 m5QY繳*?r33Wf;cѰEQ`RB*;RƜű9%2ҹH 2#:&HIY..Ċiʺ%&JT} ']#PN"<:9 &aT\r*T\QcICAI$ـ NIplc4$.vƃ ŨGy \#XNHiᷱZ:myاl2 A[t8FFT+Z@i()ԒS4ENw6hd4J?bR4@8!"5sIhp4)S4ty˰^$BtIn5D[( M-B8nT&ic_Zr\&˯js߃.\7QtYuE8\jA 4cZOL)' YH"uo _sqW*aq_g=w"&Hk*;DVΰ 153Lpqx٣i^C`^9|"?,j[neB{&>?0~EO6Z}=v t͡6ļe{T(.`HkWS 4(9 qy0yJ}aDOr6alty{?3^N}b"M v::>cJz|'0$yO>rsVҘG)o$h.d9%]l㼴Yn&.n5?/yc:dĒ_wLlk[&sJ=pOx&^|ןJ|>0@ 1ZUG2%}5Q޷iB:eSku?&PS.ɧJU9KxZP1`Ăg€IZGdF< {|&3rD O>凣Y,𘃶TZWጫt )AnhO]g+' {:@{WmՊuܒep#J;#HO[zI#n $՘Tyn69,I~Osa+*%3!6DZTeoU돋|r\cSYJ1ͭ;p[p/|8ow1Ϯ[=;7V5_ (m`͵R#BEdZ|R46]"IuPЌ5:ńs}SszY-|Z|\|52-9y\1f̼ǟ`ւܨZ.z6._~zj! $ m5$DsVBeBj'o+w疿ݖ nږ[]YsH+ Β#R&lD>^C f%JmOf *a/yUV*GpY{U આխ3p;uU5T$+{jVYm59~Pў5UBM(e;ƒ*1dZ͕%U#zH欇I ˀ΍eBY& +fU9L5tɏWkuEЖ_J.tKJI;ɤխ[dLҊ31tXxPbM-1c`C4fH< Gd""&1a7qr1*U:;U@4+v,-FǁP9rɆ <+bBޜZr8-m)PAp KI"R@Ŕi?YfF*tlqs|j5y6 j)Q )[ 8Q=C+oxDO*XS*ZTBY)!d?Ak"rH,@(PN[E ; a>\ Cj2‡{Bߧc74zW,c)e? p_|~.=~~V`\-?"Oh|jgOoP͏ƓaZys~&s7s~x'pm(1FJC˿ݽ+9Hҋ<:qM5Di#6H: tiu'ŵ(8f<5I4 ;ps\z^|o,əˁ5Xzoz{?h1ş6u;Novv~w3O}Ʋg…yvůO6xV~MZvX?uqbtaC3s>I& y"%SuPlh7tjY[,!*yjLk->ꐐ.12%KynT Jīu 9$¯6&&HR]M̈́N1+_ZJ Q@[kT2mz(S`_8j-9:W t- c5ÆI3)^";D`F57Q>^3lmV<8Lľ- ^j{^+jj#J51ElH;ƖEިRixSJ a~S2-Ҿ2sž9$䅋hLҰIB7PnNݠCXd%]%Gű gTG@NXi] P!!/\Dcd :Hʴ6g |8$k| 3]WչAh*9w0^MiMJ; 7ɿzm2kH XU UdAgbo5Tj~rm}&̓Jʤ\`})KB4h!ẁԴM쫶, }KNͩnjE^I ^_bi-4$gc& Au3NjxWN[Nw6lQ]3ISn[qGY-Iwp_m3I Mws{ZТ!h_`xJڔNxh#-_×uQ,Vayr",j|tOx bQ}&Nq| 0CmT]9PZ,f$fjӀhM3JM͐F[憱~&V0)f_V+ g~ ʰ%J{EqʿYy?/_vAzksIoo[Oxrdd'}Q\yms͕6Wem3%?N$1%rGx i_-$(Uc2,3V+d!"Bײy6տo}S1XקI(٢8ڟ\YfI})fMhuE7nV5^TV_ yF c)5GmZ ʼnN-1AeOʉ,%$\Cr6[ۇbio(9ip^HG6(ϋvŧ^59q)Gµ G7Nfzu$}Q .+.( *g:%qdBWP)L3ʕr#K (8IK#rivZStAoZ%4'e*ae'yN$LPdά`&KLNqX3HG>_n*zH&o?'ۛ|soFϟW-#zgCP9+lj\uj1Nq1Nvɤc@")c8n)Thup)+WiEA'Z1 :qO\ri-'f(% EaS%YfH%1d S }I-ɞJS&P2 A%MFw9 \ylY`h90¯FddRmI ɉvW%9C%s+hB̋n:;.cl)|4eHzM؏<R͸2e^妜fT`4-W79^t1?xLIlyxV(v쪻< ?q2Ja׋U@]߮!|K82J /Yw24'Lôftoinq ;sP`:%w 4eӉ͞9⁉a70ӱApyT7o_)^Q@=7ǭl}LZ,f& cLZ ;vYL[{1)@7cypZN=~:|> |<ɺYV#ao{F> z|[5^QcF1"ƣ&Gr(5芖#Šp@8HAIU +% d ID:KӄHJٌ<82g|tiO}8Ն q8X\jdH)IKCirHuY4FQTiA1T 7SM!Tp[Lθ2MnLJh+Zl{0E#\hMD06Qd?5bm@)p)+%ZDNCB^TyO7zsepҘԷR'i\Jk i{Iֵ>W>LJ}ʀB߬tHƊҽpWwnYe?D͒7wgMVҖձXq3DET{ @bf7{]CvbHwFZ2-|ZX}%JuL glZKFqϘ*Ԣ#5@ǻaZ9RĢwGo_ \5E2&؞%J!7NfZq ӋDg.o樾Ks? ƎgC3LzE}Ug=Wa~r3\&,Fwebͳʳ5`* N53:.&kń-"41 (cZ+C qJ2HQ\r"dIO# r}" I41:OajbڰC,Ks) rhF\KP:#9.9gMTZ?"Q[p[=ZP簽^ MM|b4'Z~xf@)1՚[n*8WJ }.0 PRSͪ3*I:2L9 ^w'j;&ցZ8p@Wt`Oo+dhΝ9qov6 D l']ݧ}V+ $3hLݓ%iMS\L8ρ yN5y>pIY VC_r%jۅU) . Cr)<bΎg> ˂d^49Āwy/95')Ს}W˲xJ]W3R/ bmgn~Bk!kGz5 {7Zs&U</O0 mz! 
RCᴃ;InهS1j`j5:9^Y:Ȁ^x'_|Ko_^/ƺŏԍ'.G??,We/˕Y/5LL$e22E$g%LdBfRKe)z|, .ziqtudz.zـAy]e~rurOd_AXn zR@ S<2OaH@&RO/\avܺKT`+WHʻR"E 13r n4~RԭTkM:>Dn ^|r ղӱcDJJ}F<E:&3P߃]92&{kq!BUwm<> ɾ!\̟^tlHNm2'F)8*F}βфCRʆġ=[ǩ[4!\°B+>yI+,@9ݚb 7}+79@}ΚB냞 Q1(Tu֣3agj6֎R,Kݏ.onN׼ٳ˙txpҫkrRt͢9Rv2eƲW>OҏpmkKzi rp9J8-/+kf)|*$X#[IF`Ap.xE(x #L{Q_׶p \lPNB{׳zh4#mW7<_wTq>\TrER]|-)DҖSe|nXG*i!PZO29s *ZJJA!'6oC gو鸵ArYk Ӯ3H6rE@"2 -yi\AnYdvފ$Ak.hb:=fX6FobY0kWo&]_&{'t҇8C.r_=(/tuaSG#~:97gw'6D}~Yߞ- X{H_ʊCki^Y1 VZz.U8] #ؓl%'ͭ`OCcP-i \O6JrCCƎ?̿ݜI0rbZR]B2О0UITd(-r\]uqDK-o%N|^/|׷_#m/ Aj~+>Yϟ}L!<( G* ?(IHB% =+P P,C3T# %77&nZ_ "4-lPRvcqS`S7hv3;rKƭ=TIfgC^+1dRun걅ק>suzMX9~J|rXn~y>{6EkInn|c,GݭVst6_E䩒3cSR0zF|{e{=|+UZo\DsdJ. j7SPb":c4ngMg%j6$ ZgHe UKڷJ\M 4dW5ԪSp;@}:EAÕCZ_6w NQ|SDndw h}BϻC7Z#;zQ5 ՞[?JBkxZ (;Fmt`V OmH7.92ũN~ݤ/.v;>+OnInmH7.{˔0FmzFITbKDu>2%RPݻw~,%+˻ZwqB7c$f]2SZnkg8rnÏPk3;$Ez -Usxc:j\L&Wz$+j`-"t/x./33 )ʌ('3sg>'^|pQp8Sx_jfs$l z*ɶJm6hFӔ}& 8[z$0]'cek'.^|7_/WPyΏhS(VаVi ZUuj:[lM{$ܟ+6l.!O.4_OC馡u?afĄ-c;c%h}j J(3+-R׎&(h&\'iO4[AE}mKWU)[xPU)鯴s!>/)ms|H+!` [wTo/KZ8KP϶Y c\9Dž"zLݦmpNyv3-!rq8}G'aea3A!hp~ڬcކJ";8^vo5r;B rE:8Y)*j~6Z5@Xufwn$י?cjmQ*:s233`Z^# \'y놺j-Ccys6 6)=ٟ?`IbXjd;tI+-X}jxImgޅW7NhM4ExOڙq[/SgY"8;#:ՓqC)&MwRgPp87e(|$#ؼ(OH0b(~hIkFJ7ޙlD)\>˹6di^"V7NT9-F Spxsr]]ڳg14jbɃɧ%=770yTAҩL~R25Օ.URHM9.(+Ljes9Z!3 '|PB6rr1H{;F|\>z+͗$kCBq_ ]wڛ4ćj0AK1B /tO?KBՔߵj$=~w5ٯpǸ fZTYLbrj9Lܒ.Jr2M`UR{7dlq*C>uTsXtòPq5 49}76Փ-cC@C`B^ISC z&3ХW}j/^ԅA/V}ǂVW +[>C(Sv^sHP*O?>q_Ŭ%ɨbsu=^]DZә(6ퟌdsu.bE=Ygrry;V4.0JF*8^x)yhWhx$%|>UpGCrxFW/:_w _w+_ɍ&>|< ]Ę3tc.1Cuҕܐ:IY()Rz5s 8/(c^kc3D*U>rʒleQɭO$kW?fHf~7󋸙_,)XV*O-sPn2EWJS-ykhPiԗ/́{.Sao9fج$n4?.cx8hqJ`Pz $GP9 x0 YQW:$`m0F-(ܕ(\hmX„rN,HF7:`BR[F#DƔ ˵r oUJv VR? VS\$xo2(LZ]HXJsꈰ*PiVV 4c!&!kxe q]uwCn_/fba^򽍡#w?w* Oް.bg߳>?=0^v⫏~FӾgcB?ooOLg ?_]?߈_o쏋Կ]n HB ?LN Hs2? }%^6ƉBos/FT8M\%$<ᥨ)cFXsQYQϛVJm  [`8=6 -*\`qR% $ rFN\:aɫSm> `+ߟoNb Sy\yM4)q*:%s'&ziL"fKt./}!5z?Z tej76$!$Wc%0 yjLlD;Uk^R2Y(|??Uc囓O& 19I윆~z5'jO<({'_f3JNUTs4=C֓e6rp#"*?^])4"?鏊*9 %2}yvNrD=_x?e"8sKr]+Q2@!_@!hSڒY7qX DՖkD+eB+hU \Q.MFr#2$ӣg>e4L3d~ڸPonc:: Ϳ~:YdkD#ˮ}bV4ߓtyuqwV "3 P8 ˜3W:^_gv16t#AcUB'$qh(A@WyxѳYQg.Ծ0k"ӯeΒ,u@^]|Qu[%դMvū ZRr6ji}x⬛!f#ކQPüZ/ Z.vN"70]0$?0~*U!1ʜ)(iH쬕7$p) ^ :0nhfԱ{ʂR(BɑV<8H38m1aYN%A{F6@'QA$VrB* - BOe"¹8ŝxHDOdQ kNXE x<>NyF9W<ˀ}NP /rF+BkXNuKA)m my G2e9)l)nTL͎Rb5 ?xsϳR=L]nw]jhdBO!<\ F>~>_KOO}XO\J22=Oh66 ijʛ)ovd͎<*PMyJ[}KZ[ 3?3ZPp X,c^Om^{7z Q>T7mCFi()7=qw1~~§+LDDѸm'+|H[}qtYu]%}\.h"/E^F[ q{o,fއ?LyX rNfs{kM/ڴ[ BZPB*!輖&QG$.[1wWK,V1p,*(SDQT%t?AY;KypHuG}H+Є:***dm mR^K[{fvʩNWNҰjxg:^x#(H_h AC% +x}vkq =WF^jw~usV 3h(+8B_>hyLH wM@F Qugx56|m0r{9NInmr%š\gIP}Xwv:'w0eT ora3E`lo6eP˺jrssweB"!Ai~%|lQ]e2z4!ыO:m_%=" SS 2mqZR\}p;/"_F6~Z@K|'{aִ'ρ^)#z^½73sӬ9م;NxkFExYLx][9Ty&X0]\gs\J$L$ |CoGu-{#ɟ*{3{Iw`0z)v-2Ze˺f"&kJQyPqVjaPRY-q%.倔dU=WɛX}Qqe-!gWw˪hzhX7 _l !z2zgIbl3i"jt*'pI"zu)$*H=St:2LF{c"4pMjFW?|3|vYnYhzr"w}KiJ=}ZW7 PTQ7aW :SR,~O^-~׿sm^ˤ~vYh&[gTs$ۓ\)Մr*Xo㶇*B3ܜ_ N)EDps;+rtnØ&V? "M)׭'gF6v1xZX7ȥb ,9kR2h/ lh ckJqrobY^[`يT m^ $5n"x.6' Tx 98¤k|H#ϸ84#RW8lǖD2=1_gzpӨ&oX6RVP֐bcЖP\sۄZ MƤŗM4\[dZHPTusi%188Jp7P(y] V >1Qo0.{}\; <d5 Xnr:AL\NAp#07ߘgVJ_^$(g w@J~{ₐ\cղ )j[$K9.7ѻ!eПR1~er]1/yx-e D0BmaРbᏛ;ͧbkz/8bf}*fv򊑠KBk$4D(qSWlǚxz)u^2 HqKY3Dtd 6* xtv3֣Q;L[HJ#SYm#b{Bb9G":Az%9P/s wA:Ђ$hUS Z셫-I|GgO߈wXNjﳇOGբ?{ hJ0 ^L @K3ɼTS&|!Z{U /++ga0ךR)s. 82 3;> =ܟマ8ޡob>'$iWU0"o_ wu򑣇 8^-$WGG c 0)yzWJ(O()_\l&K--U)LlGV3\4D5 ,p^!<΀e_r b*(ju}N,UGX| [ǨY=5 +9]nDH"ǹTS(lX1'%;~ŜGv2eE0g¿ a4iŅ%JAcJ! (c*St܄1Wja[QT͚>UlpnVyH)o|OYᕵz1G,m͎:*qjJU . 
FW梩w/3.8\cߋ U)qM-V;C$Vʊ%!,gJdgl\Ȕ (,|Seo cu{h%5!'ݮZma[ɉ<:4TLxF,/HiTaAZ +BaʊdYs":42]+.[p-)nk_\,4=:oJt@H/J||)C<3Fዃ]߾o8nuT ZjJ\:',uxR.Hi{Vq3Ӆ*M+pR ޟ{Tꀥ(K.s' ViyGK&Xw83v<1xzV(cj1 }ʻB )6;fZR%;+ H`HCb'l2\0?PhC;믯!.KC,g Hl>"(,*%U łFo)V-bO?5؅gz_r=iY`1q ,6A&A6&3ز$ }-ܒuaw[-3j*Ⱥ(I%UmsXF ,j-1i!f%3&\.hr38 I0vHh "Tfsj/,L!dlJFk 4ɵ`/*R! zjy$A`c+*l(+JidcoVf==sn)i2_81QL\eFB~ /:l0 Kzy08aHea .#ɽTMH:!%aҀ-%m/4H#-0ׅ``~2#8Tfɀb; >8QPXNКLhmւ3Ʋudֶ5ukFpjjt#80V (XtEAtJZZBZ@m# K,-Փ-ȴFnKdDH@+T~GnR;#Ի{w5|zۨznO3 +f3y0%˵, edܪd bEY*afWU5¢r~nit~ap[fnN8Y,a<8.>/̓ {|ZތO)Ո޵Z;0u2O8Ldzs?Arڅ7./-XXxc9Y!P%geڹt^*߅W Ұ齻a2[I˵!צ9ZRN:LjX1;;Z1 w2"]M[")":Dlҽ%oQQ@c3q)@NhY=;L[GE=в"#IBK=2Y)J=4 ,apHlY=s=38"aXubjkc"Rtm udɳEz[:GE^ik$t9^Htgz].JiAńRTL('hP+ 8j|0bsO= %XBy^٢?nww]b9h#>4x, cʼ| _(=%cЫ| ,]W賴#ch3V01M2;2 5ș³ ЪkǼ{op|?-|SqBLE dh]Ŀ0HkYȚp$49{ /D է/$je)zJӷ*~5x;7ce :q:*:*@kpT0JEOQpuuN(X< ג [ N]7a,DT?dïV)bk2j}wW,V}h]%m鏺%t_ ⌂d є_nJs43HRcYnƅDN1Ÿ%hXhPM"&~-7Ae='?ݻQHHU6ß[竛ݚjJCmXQ&t^lčTMЊ{Cٝ rJn+ܲR(UU[JzV~Ya媀[" ELQLNeP:qc%bpQs|9"&CL,Ir!* Vnm3I3PxLK ux~A.JvL㑠xj1!KErVDfE2F Qp qA'HTŘI"'yM/ J9'"Aϳq``H68(ת操&~i`7Ͳ~,j5 03|vM"K.oTjUA}( Y-,7?MP{x_NrO\ncBJu <zyw?* rP\S)&4RJ:pna>U-- 6Q]Nn{x0SyLCɻ8YZ<8.hՈމl%SW| ";}WW<Ĥ}g9+0A7VRU1 V("'MJ 0O9t,r#e׈(\Hio$PRe )J|&7:+V+xo*;`>@ Q7#gV2cׂꨧ FߊT~kn /%Ag3T0& ^)EL(ϩ#D+]jNoтj5;sd| r& ͜I/.䰠$i\ YcD>&YN^Y皩RLWKHjW0,+Nf|Жk匂 F/rar9Yڠ% ;a䤕^8bs.(̣r#Fro9 QbUhʱ-dTEk9U: Ĩqs\ K A ha K] <9e?MP^@=^'8W&\fٲސfm&fa:Zns!SYϘ3}aؕs*>OW/7J~[Wo]jM<_C!FŇEqf©Ak) mv*-+%2*N| q*,XVZ C08WDQEpq"+qepz-&H)zڄc.㾾nńF+0Ѥݎ蒉׵*dkE=[2qEF58JmlSXjI\W밐3:>c|1WJ5eӨ$j %qI[x @JrcG5k<GUR%z/ SL>C&5Q(kD"\Dq!\X[L#H(FzO3`_Q@93-GйV(Q! l. s9Vx4癓`jˠjXgIKXjG赡T_ ]kY̗5W98cH8WI۴v:Qr5=E1uO/p%RYHP.`><*S0,ẇr NLl=&՚ IFrB0/XgN8ȭ 6Vb^}(*.%/ &'.'Je^f#(J &KQzhA'_BhWڛ9 g"@-P$mØ ]'icH1qaj<\v W07eӬ]Uu_ppFà͏YK_wϟ.] owUy['Y1b_"F1gpQZe$e1AH壟|>ߠoFT@Gmň! U,1GO2ĵ|G>Dk$b/npޛ0q>T~p(4YCuf]3$Ǽa`xqg&$׊'|\cyCɃ;}fۘ6޻xq[7%"},'xTo5-6J][oF+_YObq'Al`cxl%O2jJ")-YX}QJA%x HC`%%z#c|[}Hw)4B!Єu)24 ƥwDjâhiIw1k( hH¢`r=-T S(Σ;̼4E'p`5!+=$  :*#2Χ.k};8P\&s&WVEiR4yQRF@oS/Zʀ5{BTz/J-1K>iL͵)Ba]IՅo1QmxP!bj 7@msyqӠR K>l͢s;iվ8߿Lq91*) Άh]@+&"ƜcLɂ96}kҹ[؀FhvxA zPĐԝOp4JraP' 1ö5i;T-4a ]gkD _?k=(n:cc91n`MlA1!ojA#֬,7R-#? LO{W]?zQ_sҬsKbU_CF{TN]UQ/gV{UwRi-Wwk(ƪY[ +:onDNJm&Lb?<<'XuJ<7T{(5+M*8d,4(3CQ88'i=p]2:BKҍ5G# A8^G#)F@ăyH -ӎsk᝟- Iدe/5Dagc ϕF}ԆOz ˒{\OҡEO(9R!Zg'ξKgEVj_O:)C{%O> nNY^nȥȓd!6YO^I!]~?`?ߢ{ƳzOqVQ1d!oD)XeFkWSf-5T%aʌ`&!eFHR)3pCռSL)eQfԐY Yu:myM49dӽ+%8AtQzch1V14:]H!c4 dwiZKM*ԒU9p䒩]ʽKcIxCKp7 oigyy&4NNLP@BL<'X'/KqI71 }*4xSJA NƑ׳y@ 5sv '|L9fMZG>n:)t˄ܹ@4rn$ȖYab.XarF S='  GTJ`b5IJ,IOnP/ $DBѾ /0eZ.qפܞ[G^&$k1]K_l:ӇLLER!\9DcQ2 vf@V猾1ުa.sg6$ }tѯ?Lg;R҅bg); /v:M*VWO|-R aSOf6ӐAJΘ[X%M: im sbi[E- 4pZ!]/tr(Ntm׭x-'i݊ zHħE*3N:+<-6 ϹPG pYY6"bY0hlfp96Ք4h8N:"+A%<Q!FL[u`c&VY_"+T&.]_Uy?DJjcZ(T:Wg*8nTjap~5L)\"vI?a}XG6`9oۖO,U,r.`SXniYɡkѣbAQ3?~@^1mb?J:XDaS=~ 7pM  nH~%#SPeS@s(´ܰw(&J_I2IYR [2 'MIF9>D˃3]Ր@0X$<_F?>NJ֤.z?Tv7~j}}{> abR*@A*Xn aW"0ٱ!D\pUfqw [\BMʯ,w-J˕1U]_VzJJ8^ofʪѷOV \gvR[-HWΙf5W$ MvZ^3Ye朥Ή?MR6;x{I.7n lA {5U=oB17-{-:(1ߤk-ԱJ)'XJ t<"NʑBQ*!+ W`~eW`GǸ*[Y.\~MƱZV84"N'@$5$OǹI8IQ[3'"9mְ7oǛmxW)_q67Ǚ–@ִǺ>W>`U*9 )W~Sޑ%q&*"]v@q3 bEp}pԩP7K0OڝR1CߴJwF\"zdx2NyU2#\+0A~u#=eBl[F FȾf)/LOǽ{(gwt/a΀J_|Dɔz~Z4(BJF2"?)=&.4{>+Zt l'cѴ.i]&ӺLuY5Pii C_@/5wΙ%hf-=9=O%Gh DzL5O3;{8K=kF{fWr&$4}@9cirrJNCS֊h&`4-%Vpf*U5m gA+)xYbլ,20)7<-a&x-Ƅeo@`hI%VZ.:& G<ldz[m"58". 'Z/~ }[h,*h&-hъ%8 q,8v3d6 m l'-V & uvRƠ5N)CsBu5.&BȂ0e1D(Ψ#$_/!`ը@+VRFei8"D8z+3ơNp^?V4Qǿ,wXD8dL>{7_p=닫Gawx]O!E{#groϸ4.(ry̿:lq;srJdJ|şd\ FFv:nw3 =aksmpe/7g?>X6Ӓ(x6<7aL{}wӠ8"* P`oϯf;B(5%bgIR|Zۙk,YĖTy  WLfƈ,N`sIu M JT503&Nwu{0_~/Mfc`7떗KxrQ-x<rjw7Fk{fH*;Dn, !^b+! 
ZHJ:CӋnu&aښ۸_aiRqs^:l\&jfUDI%_`Hːbn؎cQAFKWPlOtn0HLE{ԂAV"x<@ml20n=b98~zXr6[h+=dmvƟ><\]-Yُkvm~Mnܓ1̓}杠+YV|sFbem4Tkӝ3p?8Jmb9z6^wβǁ1i9F9hCЮcPɬ=vη=4tBDap w큅;4tBTN;+?7Ngg33o/gw_mrrˍ>8 bTm^E⥤>U4vYcnjg:g>VQ8Z\ ŤCVNšVgw|P.ǚ@ 80a\)-{eE(N0{TD\D""˹Ȩ҄bB&2 4U*MڸgnH{򳽝an:d ~*h\iS- h‰)M8sSNΤfRҙP`eX !E"ΓX~KtV?OvFkq&뉦Ეe*o1~;%J܎{'qO,)_0#ղ:@SN|@o rh%_k0E/͏*1/iItˁW١CiW79 S+ͭߴ&Ȓ 쵔@SFZ_v6h+ץ1TN)wd漈C>rM|؇tv}֌}:=_ }Ƀ"bWcטnNf*8(^n/^z,mp&4*aF ʸŜML[f$GBjfTmL,p@Qc?yqP%E|$œCiH XD:%b#yc$( ;5QPg@r"h"ǡ2a yhERSqbi媱ut*A#&}{aIx^[%+?řȅ#y9gI95Y)Ph%VY-(WYlCsdO=LȕfTuԊ Ӡ|V$u~ $%43ZzU !Tf$̈́̈n,kC.VG2rjĎ0 0,hK%cM)_S,% V,O)! 3M r KQX.1ԐMSJV# @ӆܮ1OMZezjlC6 ԩ]q9N@&)M +l:m]ަ@kYm5!?\ z|^s=ձ>v'a YÒ߯NDBc rŻ+;~Yv7R2@$$)Fr1dXHcr@Ir0, 0Mȉң.UيR9Z`G/F.Cxd艡.;TRs}G9ا*We#HvQ^ŋ}a.F=]_hglpc o\_1rn;+ǼĎOy(LSY}De/P˂OJFݜ=nmEҊ]1>j)=Sv]X)jHB^FɔPvAb":cn=Z(Ytmj&$䕋hLrFz,hUh0U@d|Ⱦ>~?;1(uȉ&3КkMth%L'v#!ujFNgq.6Kk*-բWJ)/чVLCn0zkeM8-ط:Oh?%t\1V~p[kFµ3ԐPa_>WX *du 9 +U_ .A@^v|ʩv (9@8Wnd0qc~{_$zw ;s GAYލz&Fwؚ;s^Luo=t|?[A#0> afGӇovs䄸Z/Q:-Esž%|)3EȂ.u?{LT)#e 8!֬8S11&k3fVp7nǙ.X tAl !ߑ IKg To볭Y NZ7O8̠) :FzQ> 8rt3p:m/݅0gq3F+1"ϕ0Cad$8-9!%ZJdՈ?oWSL dR5oa]Y%n-p^|t?3.`)>hl;}Y=Iٗb}ُndkpOy} #~Kr X ~AqݗGB/g=kO")$UsrT]#+퍃 $NpbեjXaxjYfT\#R? st%Pf4E:4Y3.<)U@+U$$#9FQ aƤ4QAi 4[zȊ D4R!d[q̜3L&j8 "Ww9{U*+vP@a$0bīb--6.E!P3ע6)9Jʵ͹КjFC2k7_}0j>wlo?/G% /nL{Ty-&QQ :0Βwާd -AH'#M>=@Fp{{+t"@Gj凨E5:o// K97U:FKvhٓgh  _Se'@vĶ`s2Isk)isZRs<%Idanh:9;jv|r84*e)hE*qq$Inr+H(3SÄwh?rC ٫b}\Eh+}#Ěm;ٛnwSmg-BzH;K7յ(.E1t5!Sb&(b-!1LRT?}/sm{fJ1.mvS LNܰڨm,W]xQ1r>NqfwmH_an-)4Άj<_ԸNpڌu8I}RD6hoGB3kWUZ2\[.Akf\ҕG#A Q׿ʯ"UJ\k'dIe0@Q*8CtTQ 8׹CG߆Ysd"k$sRT#P8Wq[x͎)︝-N(o`Jyڳ Dnud3ˤB "hD DIĈyl)SP6U $lsT99/o\mSƝJJ,U?B `o|2鋯ĄQ!"ʲ .\''UBmSD#qL-T)^(x<MB2VVc6IDF8K񗦔3ID J !kNsyo>v2C?`Rִ_C-mHJ~~<~gHnty2,zw PhQݶ]-iN~_OgUx՗"V_bw`>~`Ҫ] R@Șsqܘ )b)S W%P2Cb#()bNmYD_ ǟ?Q pD9!J X!;qkߠ[$},ﳛ";ՀP#PH/WO$KZ=G/n2}3w/\Dm+L>wf76aT9{謐&ǿ+[8مZKܹ] Z纛o<`Ws;Β-n6}J0R8%3$哅HӏF~ oB/x5rL L~41Y1`a]/e xWWC#_ܯʓE5z7e;!|ޙlN$YJݑ.m"q BT|2\ a沀tF=Vqנ`' ~)Թ8]=O>E9.PECNWt9eEV>U5;F$%ў:5PWÅqPv` MC RBogu PJHvL^'hS2tqp)Y ~i}+'4J; Dks\÷q΀)Є)-lBLeT|lj[nxQb,, Hupr A4pĐ`mFKjW^4@-)1*IԠ4~jv'يƥ KzIgRp:ȑGvB D!zڗeB:akGLK˜ ǖ9>D#$iAk%*B%‹Bބ޻M{z[:ѳ%}b Ku{Sĝ9Q@}%N !@aӪ( B_ q"?mt/%:y½wY縟vm3ص{=2mD)ښdr9W\co!`r$[Ul@C"BTsi,ѣU1A;eށ,ŹX 7BNprȹJ0F"E"b)#miB4* 7 K9L Y:""% aѻLFǵչ3Kmhb(m;(WCtjZJǞn{M-[W(oNak[9~fc-)k>g1]`2rStEZ`M* R*]fTZ6 Jf$ƕ68?FԈ;l,P~Pa.9 >w;O%O٧Pt|[O"UP>ߋPOb~ ^)M.ijI^5+UcRa~sc\||t|s~vŋ[3<_ uP{jG \18M0T *6bTW,Qk1RJqÖ%slH%lm-CW)8L&Oh 뒟E]G3d&4`V1NQ@j<7{*R:/^"=`/11䷭{tz+< rfn"aW,<2~]:5co.3=dɬ}knA#T{&&o"총u-] W["s{٥o3 yLgp1qZY=,55Wԗ!`!O瀕xa@ǎ8Ɔz&ެo+`L9K ^Y˻Qę9=(-is?7ItbŜzao2թ'׋׼-FOO-^1ז[]l\LK}B[J%)?brJV$"h5PEi/ jw289 n~ϣ!>o;v^=^zmvpf>}'eU<4,tߋFm̬_>cVDcÅ;,`c3ۏ ̉!_޾ɔ&~~<~oDuGyʩ 7og)#k0_V"9ҵ鋅7g6w7ikiyzxp n oߜ}5gS{0Y^dEaJU!歷С9r#*f [yep_/*G8'e,'w=dNS6l*i@^-.ճ>$ wf#sw7x}5%6rklyLa2+Z]/n,XRi; K|NV |W2y-P%[_l&B"dBgnDp"dn;ɨ4CN9NALs30E$6؂6,+ٙ_9np[gj6H)ADȖy TRvZQ>J K]!QrIh6[xt@7<0f05sOpYø b90A}sUf "03^\N~5~AQv$74:K6JA^JCpX<M$K) Y2pZ2d/J"+ًvSѨ4xlTFQ0$qwNjϜBEj`Dq2vP KMEIΩJB&R&%TKP^K3AFCIya)Z({h!2T8KA ^nڲ~,oۮ?9+W T"-Ԃ`ڷIrn=v!9 m=6,9cd;{::Rdkή=px{8<[p5 ^:=C,LTÝg}g_<=837yv,(NH>Q:xowUs)Rݡg˚݄f-%m1 RɚM 3{#1@bOF$֧Q`5<M@u~zyP7pdRohBycP/Xy \࣍17G?=GZV;D}_N#!]2z\D~e9)V uхY)suw¯[~mB ]gf2ru9J@:XZٻ6r$Wz陘.5}Oή;F>Ae׿iVoھ|wJ3;l4& He 8}P:1m2Yk5HݺdKCQ/F6$PjoI/ܡ<RH"&z JeM6'fNJ .&QѠ*ʜ KYAt$\ǴЄsZ(RC@{69 _¹imyFr8B.>bܓ*-⏁KIt%Vʤ.ZFT6 %8yRbYB& ͳ6f%hh2De*LIM44i@hnQRXz2$#OlXX h91_(Im~`(0hZC yLxmy\va~ 5.{7Zsw9e.,-sA ,']%]6b\sӮwW iӧdtvQ C8Lk39IoٴTHKk6kRfsĎKr}^$k*Q4_r0gG{:|sLU"SMk Q Cr躺q zM8%xK A *8 c re\R:uHR4ϓIc ,/l|OϜ%^ɫ/aI&HONa5pdk=Mt!> I-蠪]8NYF 6 ;Cl*>A."i#_UM @2;SӃNeM^hp\}~ 
Jan 30 16:22:24 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 16:22:24 crc restorecon[4739]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to
system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc 
restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc 
restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 
crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: 
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:25 crc restorecon[4739]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.821139    4766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828103    4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828140    4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828145    4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828151    4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828155    4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828159    4766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828164    4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828170    4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828199    4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828204    4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828209    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828214    4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828219    4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828223    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828228    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828232    4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828236    4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828241    4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828245    4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828250    4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828257    4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828263    4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828269    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828276    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828282    4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828287    4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828293    4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828298    4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828303    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828308    4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828321    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828326    4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828330    4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828336    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828340    4766 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828345    4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828349    4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828354    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828358    4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828363    4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828367    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828371    4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828375    4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828380    4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828384    4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828388    4766 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828393    4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828400    4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828406    4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828410    4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828415    4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828420    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828424    4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828430    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828434    4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828438    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828442    4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828446    4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828451    4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828455    4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828459    4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828463    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828467    4766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828471    4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828476    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828480    4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828484    4766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828488    4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828492    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828496    4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828500    4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829362    4766 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829381    4766 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829389    4766 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829396    4766 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829403    4766 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829409    4766 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829415    4766 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829421    4766 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829427    4766 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829432    4766 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829437    4766 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829444    4766 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829449    4766 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829454    4766 flags.go:64] FLAG: --cgroup-root=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829459    4766 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829464    4766 flags.go:64] FLAG: --client-ca-file=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829469    4766 flags.go:64] FLAG: --cloud-config=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829474    4766 flags.go:64] FLAG: --cloud-provider=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829478    4766 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829486    4766 flags.go:64] FLAG: --cluster-domain=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829491    4766 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829498    4766 flags.go:64] FLAG: --config-dir=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829503    4766 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829509    4766 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829515    4766 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829520    4766 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829525    4766 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829530    4766 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829535    4766 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829540    4766 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829545    4766 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829550    4766 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829555    4766 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829561    4766 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829566    4766 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829571    4766 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829576    4766 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829581    4766 flags.go:64] FLAG: --enable-server="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829585    4766 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829592    4766 flags.go:64] FLAG: --event-burst="100"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829597    4766 flags.go:64] FLAG: --event-qps="50"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829602    4766 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829606    4766 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829611    4766 flags.go:64] FLAG: --eviction-hard=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829617    4766 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829622    4766 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829628    4766 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829633    4766 flags.go:64] FLAG: --eviction-soft=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829637    4766 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829642    4766 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829647    4766 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829652    4766 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829656    4766 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829663    4766 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829668    4766 flags.go:64] FLAG: --feature-gates=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829674    4766 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829679    4766 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829684    4766 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829689    4766 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829695    4766 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829701    4766 flags.go:64] FLAG: --help="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829707    4766 flags.go:64] FLAG: --hostname-override=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829712    4766 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829717    4766 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829722    4766 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829727    4766 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829731    4766 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829736    4766 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829741    4766 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829746    4766 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829750    4766 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829755    4766 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829761    4766 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829765    4766 flags.go:64] FLAG: --kube-reserved=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829770    4766 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829775    4766 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829780    4766 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829785    4766 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829790    4766 flags.go:64] FLAG: --lock-file=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829795    4766 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829800    4766 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829805    4766 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829821    4766 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829826    4766 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829830    4766 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829836    4766 flags.go:64] FLAG: --logging-format="text"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829842    4766 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829847    4766 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829852    4766 flags.go:64] FLAG: --manifest-url=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829857    4766 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829864    4766 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829869    4766 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829875    4766 flags.go:64] FLAG: --max-pods="110"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829880    4766 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829885    4766 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829890    4766 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829895    4766 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829900    4766 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829905    4766 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829910    4766 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829923    4766 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829928    4766 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829933    4766 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829939    4766 flags.go:64] FLAG: --pod-cidr=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829943    4766 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829952    4766 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829956    4766 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829961    4766 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829966    4766 flags.go:64] FLAG: --port="10250"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829971    4766 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829975    4766 flags.go:64] FLAG: --provider-id=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829980    4766 flags.go:64] FLAG: --qos-reserved=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829985    4766 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829989    4766 flags.go:64] FLAG: --register-node="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829994    4766 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829999    4766 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830008    4766 flags.go:64] FLAG: --registry-burst="10"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830013    4766 flags.go:64] FLAG: --registry-qps="5"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830020    4766 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830025    4766 flags.go:64] FLAG: --reserved-memory=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830031    4766 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830036    4766 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830041    4766 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830046    4766 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830050    4766 flags.go:64] FLAG: --runonce="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830055    4766 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830060    4766 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830065    4766 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830070    4766 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830074    4766 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830079    4766 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830084    4766 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830089    4766 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830094    4766 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830100    4766 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830104    4766 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830109    4766 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830114    4766 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830119    4766 flags.go:64] FLAG: --system-cgroups=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830124    4766 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830132    4766 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830137    4766 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830142    4766 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830149    4766 flags.go:64] FLAG: --tls-min-version=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830155    4766 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830160    4766 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830165    4766 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830170    4766 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830198    4766 flags.go:64] FLAG: --v="2"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830205    4766 flags.go:64] FLAG: --version="false"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830215    4766 flags.go:64] FLAG: --vmodule=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830221    4766 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830227    4766 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830377    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830385    4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830391    4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830398    4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830402    4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830407    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830411    4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830416    4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830421    4766 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830425    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830429    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830433    4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830438    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830442    4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830446    4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830450    4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830454    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830458    4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830463    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830467    4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830471    4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830475    4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830479    4766 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830483    4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830489    4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830494    4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830499    4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830505    4766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830509    4766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830514    4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830519    4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830523    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830527    4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830531    4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830535    4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830539    4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830544    4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830550    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830555    4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830560    4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830565    4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830569    4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830574    4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830579    4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830583    4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830588    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830593    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830598    4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830602    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830607    4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830612    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830617    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830622    4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830626    4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830631    4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830636    4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830641    4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830645    4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830649    4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830653    4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830657    4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830662    4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830667    4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830671    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830675    4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830679    4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830684    4766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830688    4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830692    4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830696    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830701    4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830718    4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838244    4766 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838278    4766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838366    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838376    4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838381    4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838386    4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838391    4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838397    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838402    4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838406    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838411    4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838416    4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838420    4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838425    4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838429    4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838434    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838438    4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838443    4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838447    4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838452    4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838456    4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838461    4766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838465    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838472    4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838478    4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838483    4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838487    4766 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838492    4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838497    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838503    4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838511    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838517    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838523    4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838529    4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838535    4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838541    4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838547    4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838552    4766 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838556    4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838561    4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838566    4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838570    4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838574    4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838579    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838583    4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838587    4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838592    4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838596    4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838601    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838605    4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838610    4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838614    4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838619    4766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838623    4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838628    4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838633    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838637    4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838642    4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838646    4766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838651    4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838656    4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838661    4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838666    4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838671    4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838675    4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838679    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838685    4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838690    4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838696    4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838701    4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838707    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838712    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838716    4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838726    4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838872    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838881    4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838886    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838892    4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838897    4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838901    4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838907    4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838911    4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838917    4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838922    4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838927    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838933    4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838938    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838944    4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838949    4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838953    4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838959    4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838963    4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838968    4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838973    4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838978    4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838983    4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838987    4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838992    4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838996    4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839001    4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839005    4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839010    4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839016    4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839020    4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839025    4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839029    4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30
16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839034 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839038 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839043 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839048 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839052 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839057 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839063 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839069 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839075 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839079 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839084 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839089 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839094 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839099 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839105 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839111 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839115 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839120 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839126 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839132 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839137 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839142 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839146 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839151 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839155 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839160 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839165 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839169 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839177 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839196 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839202 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839206 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839213 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839218 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839257 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839261 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839266 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839270 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839275 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.839282 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.839477 4766 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.844140 4766 bootstrap.go:85] "Current kubeconfig 
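The two passes above show the kubelet's feature-gate handling at startup: names this kubelet binary does not register are logged as "unrecognized feature gate" warnings (feature_gate.go:330) and skipped, explicitly set deprecated and GA gates warn at feature_gate.go:351/353, and the resolved map is then printed at feature_gate.go:386. Note the full gate set is applied twice within a millisecond, which is why every warning appears twice. A minimal sketch of the same flow using the upstream k8s.io/component-base/featuregate API (an assumption: upstream Set returns an error for an unknown gate, whereas the build in this log only warns and continues):

package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	fg := featuregate.NewFeatureGate()

	// Register the gates this binary knows about (small subset for illustration).
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"KMSv1":                     {Default: false, PreRelease: featuregate.Deprecated},
		"ValidatingAdmissionPolicy": {Default: true, PreRelease: featuregate.GA},
	}); err != nil {
		fmt.Println("add failed:", err)
		return
	}

	// Setting a registered deprecated or GA gate succeeds, but logs the
	// "Setting deprecated/GA feature gate ..." warnings seen above.
	if err := fg.Set("KMSv1=true"); err != nil {
		fmt.Println("set failed:", err)
	}
	fmt.Println("KMSv1 enabled:", fg.Enabled("KMSv1"))

	// Upstream rejects unknown names outright; the kubelet build in this log
	// instead logs "unrecognized feature gate: ..." and keeps going.
	if err := fg.Set("GatewayAPI=true"); err != nil {
		fmt.Println("set failed:", err) // unrecognized feature gate: GatewayAPI
	}
}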
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.839477 4766 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.844140 4766 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.844255 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.845646 4766 server.go:997] "Starting client certificate rotation"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.845681 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.847509 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 01:41:46.368209583 +0000 UTC
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.847607 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
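The rotation deadline above is derived from the loaded client certificate's validity window. When checking rotation behavior by hand, the certificate the kubelet reports loading can be inspected directly; a small standalone sketch using only the Go standard library (the file path comes from the certificate_store.go line above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// The same file the kubelet loads in certificate_store.go above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The PEM holds certificate and key; inspect the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%v notBefore=%v notAfter=%v\n", cert.Subject, cert.NotBefore, cert.NotAfter)
		return
	}
}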
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.871099 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.874029 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.874223 4766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.892638 4766 log.go:25] "Validated CRI v1 runtime API"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.929728 4766 log.go:25] "Validated CRI v1 image API"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.931625 4766 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.936359 4766 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-16-17-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.936407 4766 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.952797 4766 manager.go:217] Machine: {Timestamp:2026-01-30 16:22:25.950067 +0000 UTC m=+0.588024356 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a00817eb-12ea-49e2-ab4d-6ba5164a8361 BootID:6a40bef8-b5e4-4d79-9bcd-48caff34a744 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:05:5e:29 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:05:5e:29 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:47:29 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f2:1b:00 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:28:e8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f0:b2:36 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:29:2a:0a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:47:a4:70:71:3f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:30:60:3a:f8:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953050 4766 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953332 4766 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953651 4766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953834 4766 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953874 4766 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.954136 4766 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.954146 4766 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955114 4766 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955150 4766 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955749 4766 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955850 4766 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962552 4766 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962587 4766 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962614 4766 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962631 4766 kubelet.go:324] "Adding apiserver pod source"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962644 4766 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.967481 4766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.968364 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
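The nodeConfig blob above is plain JSON, so the reservation and hard-eviction settings can be pulled out programmatically rather than read by eye. A sketch that decodes just a few fields of interest (the struct below is hypothetical, written to match the field names visible in the logged blob, not the kubelet's internal type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Trimmed-down view of the nodeConfig JSON logged by container_manager_linux.go.
type nodeConfig struct {
	SystemReserved         map[string]string `json:"SystemReserved"`
	KubeReserved           map[string]string `json:"KubeReserved"`
	PodPidsLimit           int64             `json:"PodPidsLimit"`
	HardEvictionThresholds []struct {
		Signal   string `json:"Signal"`
		Operator string `json:"Operator"`
	} `json:"HardEvictionThresholds"`
}

func main() {
	// Paste the nodeConfig={...} payload from the log line here (shortened).
	raw := `{"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"KubeReserved":null,"PodPidsLimit":4096,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan"}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("system reserved: %v, pod pids limit: %d\n", cfg.SystemReserved, cfg.PodPidsLimit)
	for _, t := range cfg.HardEvictionThresholds {
		fmt.Printf("evict when %s is %s threshold\n", t.Signal, t.Operator)
	}
}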
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.970002 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.970124 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.970411 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.970479 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.970913 4766 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972792 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972848 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972863 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972875 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972897 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972911 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972924 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972946 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972960 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972974 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.973017 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.973031 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.974204 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.974788 4766 server.go:1280] "Started kubelet"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975733 4766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975827 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975737 4766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 16:22:25 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.976663 4766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981014 4766 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981473 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981509 4766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.982534 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:21:32.225467725 +0000 UTC
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.982791 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984246 4766 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984270 4766 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984397 4766 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.984493 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.985024 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.985092 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.985492 4766 factory.go:55] Registering systemd factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.985533 4766 factory.go:221] Registration of the systemd container factory successfully
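Every failure above reduces to the same symptom, dial tcp 38.102.83.103:6443: connect: connection refused, i.e. the API server behind api-int.crc.testing:6443 was not yet accepting connections while the kubelet came up. A quick standalone probe for that endpoint (a diagnostic sketch only; host and port are taken from the log lines above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint the kubelet is failing to reach in the surrounding log lines.
	addr := "api-int.crc.testing:6443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // e.g. connect: connection refused
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect OK:", conn.RemoteAddr())
}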
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.984671 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8ec2d1cfd9cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:22:25.974753739 +0000 UTC m=+0.612711105,LastTimestamp:2026-01-30 16:22:25.974753739 +0000 UTC m=+0.612711105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986673 4766 factory.go:153] Registering CRI-O factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986713 4766 factory.go:221] Registration of the crio container factory successfully
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986793 4766 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986831 4766 factory.go:103] Registering Raw factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986848 4766 manager.go:1196] Started watching for new ooms in manager
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.988642 4766 manager.go:319] Starting recovery of all containers
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.992941 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993066 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993089 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993109 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993129 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993149 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993171 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993219 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993242 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993263 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993282 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993303 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993355 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993385 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993461 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.994317 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995290 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995328 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995342 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995360 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995373 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995387 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995400 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995423 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995440 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995459 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995478 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995493 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995531 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995549 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995564 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995577 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995593 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995607 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995620 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995644 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995664 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995678 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995693 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995713 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995727 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995771 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995788 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995807 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995825 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995838 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995851 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995866 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995882 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995896 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995908 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995921 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995940 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995955 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995970 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995990 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996005 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996018 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996031 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996042 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996055 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996099 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996113 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996126 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996140 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996153 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996165 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996196 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996210 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996222 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996235 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996279 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996293 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996306 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996319 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996333 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996347 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996363 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996377 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996392 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996409 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996422 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996436 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996450 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996465 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996480 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996496 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996514 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996528 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996543 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996557 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996569 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996583 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996598 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def"
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996612 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996627 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996643 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996657 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996671 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996685 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996701 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996718 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996733 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996748 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996768 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996784 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996801 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996816 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996832 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996846 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996860 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996875 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996892 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996907 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996922 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996937 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996953 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996968 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996982 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996997 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997012 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997027 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997042 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997056 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997068 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997081 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997093 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997107 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997122 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997135 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997148 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997162 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997222 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997240 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997252 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997264 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997278 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997292 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997307 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997323 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997336 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997353 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997367 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997380 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997395 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997409 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997422 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997435 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997449 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997465 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997478 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997493 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997507 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997520 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997533 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997547 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997560 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997575 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997589 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997602 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997617 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997634 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997649 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997664 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997678 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997692 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997705 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997719 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997732 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997745 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997761 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997776 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997788 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997803 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997817 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997830 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997846 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997861 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997875 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997888 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997902 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997916 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997929 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997942 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997956 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997971 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997987 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998000 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998013 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998026 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998040 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998053 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998068 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998081 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998105 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.006676 4766 manager.go:324] Recovery completed Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010284 4766 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010375 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010421 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010458 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011256 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011553 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011593 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011614 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011637 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011656 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011676 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011695 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011713 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011730 4766 reconstruct.go:97] "Volume reconstruction finished" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011743 4766 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.019067 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022335 4766 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022356 4766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022375 4766 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.035824 4766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038064 4766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038103 4766 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038132 4766 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.038293 4766 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.039228 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.039370 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.045319 4766 policy_none.go:49] "None policy: Start" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.046249 4766 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.046281 4766 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.083591 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105441 4766 manager.go:334] "Starting Device Plugin manager" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105656 4766 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105668 4766 server.go:79] "Starting device plugin registration server" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106095 4766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106107 4766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106442 4766 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106545 4766 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106554 4766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.112893 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.138349 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 16:22:26 crc kubenswrapper[4766]: 
I0130 16:22:26.138405 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139432 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139664 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139713 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141341 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141447 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141476 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142332 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142423 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143145 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143285 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143325 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144370 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.185292 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.206492 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.207737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.208112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc 
kubenswrapper[4766]: I0130 16:22:26.208263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.208495 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.209172 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216666 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216686 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216718 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216756 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216771 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216794 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216904 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216922 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.217032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 
crc kubenswrapper[4766]: I0130 16:22:26.318628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318688 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319168 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 
16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319248 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319276 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319297 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319313 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319261 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319318 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.410119 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc 
kubenswrapper[4766]: I0130 16:22:26.411957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411981 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.412508 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.471020 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.480081 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.495906 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.506767 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.511689 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.528597 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48 WatchSource:0}: Error finding container 4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48: Status 404 returned error can't find the container with id 4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.530002 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818 WatchSource:0}: Error finding container 095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818: Status 404 returned error can't find the container with id 095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.535659 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5 WatchSource:0}: Error finding container 5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5: Status 404 returned error can't find the container with id 5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.541032 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f WatchSource:0}: Error finding container 9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f: Status 404 
returned error can't find the container with id 9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.586878 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.813076 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815567 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.816168 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.852794 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.852906 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.977021 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.983053 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:47:56.313399336 +0000 UTC Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.021104 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.021212 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" 
logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.044158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.045224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.046265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"585d4a4004f6a9bb513d5de66744c5230d2b3386db687e9ff734ea5afdb49052"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.047208 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.048243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818"} Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.302432 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.302529 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.387717 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.584665 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.584826 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.616395 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618504 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.618981 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.937619 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.939025 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.977450 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.983760 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:38:32.042409131 +0000 UTC Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052542 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052676 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054288 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054356 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054494 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060020 4766 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060377 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064291 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064563 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065664 4766 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068011 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068059 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068062 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068369 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: W0130 16:22:28.557198 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:28 crc kubenswrapper[4766]: E0130 16:22:28.557319 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.977107 4766 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.984246 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:56:22.548172202 +0000 UTC Jan 30 16:22:28 crc kubenswrapper[4766]: E0130 16:22:28.989369 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="3.2s" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073025 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073071 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073075 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078578 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d" exitCode=0 Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078917 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089290 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089354 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.219904 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221377 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221422 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:29 crc kubenswrapper[4766]: E0130 16:22:29.221907 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.264605 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:29 crc kubenswrapper[4766]: W0130 16:22:29.455462 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:29 crc kubenswrapper[4766]: E0130 16:22:29.455590 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.985309 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:54:39.061625283 +0000 UTC Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.096289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036"} Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.096404 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099058 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac" exitCode=0 Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac"} Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099237 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099306 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099312 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099309 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.985889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:01:55.037436576 +0000 UTC
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.024632 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"374cae6e4bbbb88f2f6fc9093a4f5597b2afeae8361a9a76ccf384cae5d8b2b3"}
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106701 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"332a9a9c49123e23601444adafca95852030d0e19a682316100bc45b0f849209"}
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bfcc8c946ea5c547539386c797026307ba8bd235fd4694341695882ec2442702"}
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"108f1ca5a7cf1c4f0665b5b82b00c8b911dfe22582334836d3bc8a5afe17a1c6"}
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f01fb269c6fb534b4e45e60f3409c21e9700bc901eda3f975e990f77a9286838"}
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106784 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106823 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106786 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106862 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106782 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.109275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.109303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.109314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.986701 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:37:00.940794838 +0000 UTC
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.069287 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.109070 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.109101 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.399770 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.399929 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.422937 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424581 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.582715 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:32.987680 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:12:26.0525009 +0000 UTC
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.111275 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.314650 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.314819 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.315922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.316008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.316031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.988056 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:42:21.867496807 +0000 UTC
Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.992436 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.025273 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.025366 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.113589 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.988896 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:56:16.36936635 +0000 UTC
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.905281 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.905498 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.911288 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.989472 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:13:20.847315973 +0000 UTC
Jan 30 16:22:36 crc kubenswrapper[4766]: E0130 16:22:36.113029 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117023 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.990195 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:16:23.748273929 +0000 UTC
Jan 30 16:22:37 crc kubenswrapper[4766]: I0130 16:22:37.991125 4766 certificate_manager.go:356]
Jan 30 16:22:38 crc kubenswrapper[4766]: I0130 16:22:38.992284 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:41:08.183008527 +0000 UTC
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.268293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.268404 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.977426 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.993118 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:09:24.394457605 +0000 UTC
Jan 30 16:22:40 crc kubenswrapper[4766]: W0130 16:22:40.086795 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.087104 4766 trace.go:236] Trace[1612209459]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:30.085) (total time: 10001ms):
Jan 30 16:22:40 crc kubenswrapper[4766]: Trace[1612209459]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:22:40.086)
Jan 30 16:22:40 crc kubenswrapper[4766]: Trace[1612209459]: [10.0016397s] [10.0016397s] END
Jan 30 16:22:40 crc kubenswrapper[4766]: E0130 16:22:40.087130 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.123083 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.123159 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.126350 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128060 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036" exitCode=255
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036"}
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128288 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129552 4766 scope.go:117] "RemoveContainer" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.134277 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.134339 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.566112 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.566366 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.603978 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.993532 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:30:49.133333254 +0000 UTC
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.133104 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.135651 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1"}
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.135803 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136026 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.150967 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.993658 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:19:42.166037136 +0000 UTC
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.137852 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.994484 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:52:14.64602606 +0000 UTC Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.995414 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:08:59.119691219 +0000 UTC Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.996981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.997147 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.997226 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.000608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.025753 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.025836 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.142624 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.604381 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 
16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.973672 4766 apiserver.go:52] "Watching apiserver" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979257 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979530 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979871 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980036 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:44 crc kubenswrapper[4766]: E0130 16:22:44.980238 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980276 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980331 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:44 crc kubenswrapper[4766]: E0130 16:22:44.980386 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:44 crc kubenswrapper[4766]: E0130 16:22:44.980522 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982570 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982603 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982570 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982579 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983156 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983164 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983170 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.985688 4766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.987171 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.996002 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:20:49.656259248 +0000 UTC Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.005845 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.021612 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.031867 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.046664 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.056457 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.065415 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.078072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.114001 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116908 4766 trace.go:236] Trace[1293579205]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:32.685) (total time: 12431ms):
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[1293579205]: ---"Objects listed" error: 12431ms (16:22:45.116)
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[1293579205]: [12.43141281s] [12.43141281s] END
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116952 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116926 4766 trace.go:236] Trace[333219771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:30.769) (total time: 14346ms):
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[333219771]: ---"Objects listed" error: 14346ms (16:22:45.116)
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[333219771]: [14.346862219s] [14.346862219s] END
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117049 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117224 4766 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117717 4766 trace.go:236] Trace[188495366]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:33.854) (total time: 11263ms):
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[188495366]: ---"Objects listed" error: 11263ms (16:22:45.117)
Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[188495366]: [11.263564895s] [11.263564895s] END
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117738 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.118375 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.124891 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.148899 4766 csr.go:261] certificate signing request csr-sffz8 is approved, waiting to be issued
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.157340 4766 csr.go:257] certificate signing request csr-sffz8 is issued
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217789 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217913 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217971 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218713 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218756 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219193 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219324 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219400 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219422 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219454 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219534 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219630 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219748 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219766 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219785 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219882 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219898 
4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219932 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219950 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219966 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220012 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220099 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220115 
4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220153 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220168 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220204 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220228 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220297 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220331 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220442 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220458 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220488 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220504 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220628 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220661 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220678 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220738 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220755 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220775 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220794 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220831 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220959 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220981 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221005 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221094 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221112 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221154 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221247 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221272 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221296 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221314 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221329 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221346 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221476 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221541 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221556 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221570 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221602 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221634 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221650 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221727 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222020 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222073 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222089 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222105 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222121 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222160 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222203 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222312 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222383 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222409 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222435 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222481 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222530 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222589 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222691 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222771 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222830 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222847 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.222920 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.722895294 +0000 UTC m=+20.360852730 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222988 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223056 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223068 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223090 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223149 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223203 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223304 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223351 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223431 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223583 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223615 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223771 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223822 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223848 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223915 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223931 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223945 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223959 4766 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223973 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223985 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223999 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224011 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224025 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.224089 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224102 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224117 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224130 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224143 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224155 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224167 4766 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224199 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224217 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224230 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224243 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229381 4766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.245331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.273671 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.264604 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223360 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223417 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223662 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223839 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224121 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224155 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224245 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224499 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224536 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224648 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225010 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225125 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227819 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227963 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.228110 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228215 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228370 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228536 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228621 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229109 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229164 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.236000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.238901 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.238932 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239229 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239724 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.240473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.241752 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243844 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244197 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.244940 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.234240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.245484 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.246904 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.247372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.247699 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.248230 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.250350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.250609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251698 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252261 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252499 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252957 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253588 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253753 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253858 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286204 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286341 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286930 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287235 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287697 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288057 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289232 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289783 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.290607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.263663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.264511 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.290784 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.290807 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271082 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271712 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271928 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.272171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.274251 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.276411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.276757 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277060 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278004 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278103 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278521 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.280302 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281628 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281795 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.282442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.282877 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.283076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.283445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.284322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.285190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291094 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.791067212 +0000 UTC m=+20.429024558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291373 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291442 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291492 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291512 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291592 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.791575246 +0000 UTC m=+20.429532592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291658 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-flxfz"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291722 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.308484 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.292507 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 16:22:45.791739981 +0000 UTC m=+20.429697327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.292886 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293151 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293871 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.292854 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293548 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294326 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295122 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295301 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295954 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.296065 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.304211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.304338 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306457 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.307833 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.308398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.308802 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 16:22:45.8087791 +0000 UTC m=+20.446736446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293668 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293460 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vhmx5"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.315750 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316009 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316045 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316312 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.318875 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320040 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320131 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321440 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321643 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.322167 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.323277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.330576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.331683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332341 4766 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332363 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332376 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332389 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332400 4766 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.332411 4766 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332422 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332433 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332444 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332454 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332469 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332479 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332489 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332501 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332514 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332526 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332541 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332552 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332563 4766 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332575 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332587 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332599 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332611 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332622 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332634 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332645 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332657 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332668 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332680 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332693 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332705 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332718 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332730 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332743 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332757 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332768 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332780 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332792 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332803 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332813 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332825 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332836 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332849 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332860 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.332871 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332883 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332894 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332907 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332918 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332930 4766 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332941 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332952 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332964 4766 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332976 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332988 4766 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333001 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333013 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333026 4766 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333036 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333064 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333076 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333086 4766 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333098 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333108 4766 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333121 4766 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333132 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333143 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333154 4766 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333164 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333206 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333220 4766 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333231 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333242 4766 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333254 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333264 4766 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333275 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333287 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333298 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333308 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333319 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333329 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333342 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333354 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333365 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333376 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333387 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333401 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333412 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333424 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333434 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333447 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333457 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333468 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333479 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333491 4766 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333503 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333514 4766 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333526 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333538 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333549 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333558 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333567 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333575 4766 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333585 4766 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333594 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333603 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333611 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333619 4766 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333630 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333640 4766 
reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333651 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333661 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333672 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333682 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333696 4766 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333709 4766 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333722 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333732 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333743 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333753 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333766 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333777 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333789 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333800 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333811 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333822 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333832 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333842 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333855 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333869 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333879 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333889 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333899 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333910 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333927 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333937 4766 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333947 4766 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333957 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333967 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333979 4766 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333989 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333999 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334010 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334021 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334033 4766 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334043 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334053 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334063 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334083 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334093 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334103 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334113 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334124 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334134 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334144 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334154 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334166 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335609 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335632 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335645 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335720 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.338031 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339739 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339886 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339995 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.340107 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.345790 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.346690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.348048 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.351509 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352946 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.357079 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.366416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.368961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.383305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.386943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.388466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") 
pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436833 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436920 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437024 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437103 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437190 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437275 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437347 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437414 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437480 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437549 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437626 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437694 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437763 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437833 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.458402 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.506469 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.522229 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.533300 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538204 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538646 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: 
\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.539491 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.544027 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.550640 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.557231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.557953 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.565591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.574653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.585079 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.596358 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.596385 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.606572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.608758 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849 WatchSource:0}: Error finding container d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849: Status 404 returned error can't find the container with id d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849 Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.613934 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.618651 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1 WatchSource:0}: Error finding container ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1: Status 404 returned error can't find the container with id ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1 Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.629400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.643378 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.654018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.656231 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.656297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.669439 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ddhn5"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.669788 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673395 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673425 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673550 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.674887 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.675316 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.684255 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.684890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vvzk9"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.686092 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.692938 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-l6xdr"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.694361 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695060 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695279 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695481 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695362 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695350 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.696614 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.707637 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.707672 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717441 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717766 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718139 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718303 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718458 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.727406 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739702 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740064 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740306 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740373 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740448 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9kc\" (UniqueName: \"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740571 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740786 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740866 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740937 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741311 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741639 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742045 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.742064 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.742043823 +0000 UTC m=+21.380001219 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741918 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742519 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742555 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742644 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742665 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742710 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742799 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742904 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.764991 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.775910 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.785552 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.796564 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.810244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.821688 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.833613 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.841532 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.843984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844087 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844109 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 
16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844309 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.844330 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844405 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844415 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844538 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844564 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.844541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844579 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844584 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844634 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.844615877 +0000 UTC m=+21.482573283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844658 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844720 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9kc\" (UniqueName: 
\"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844776 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844866 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844896 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845030 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844976 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: 
\"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845071 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845094 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845101 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845170 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: 
\"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845284 4766 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845467 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845391 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.845509 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845632 4766 
reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845651 4766 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845670 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845685 4766 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845693 4766 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845727 4766 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845632 4766 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845749 4766 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845767 4766 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845779 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845787 4766 
reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845804 4766 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845736 4766 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845660 4766 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845830 4766 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845834 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845753 4766 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845805 4766 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845823 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g/status\": http2: client connection force closed via ClientConn.Close" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845861 4766 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845869 4766 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845852 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846053 4766 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845878 4766 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845884 4766 reflector.go:484] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845893 4766 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845897 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845905 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845913 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845916 4766 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845928 4766 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846241 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846287 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846311 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846328 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846345 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846290 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.846281173 +0000 UTC m=+21.484238519 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846420 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846432 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.846410257 +0000 UTC m=+21.484367663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845939 4766 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845946 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845959 4766 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845972 4766 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846478 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845984 4766 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: 
I0130 16:22:45.846504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845818 4766 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846541 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.8465291 +0000 UTC m=+21.484486446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845753 4766 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846012 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846033 4766 projected.go:194] Error preparing data for projected volume kube-api-access-dv5xn for pod openshift-multus/multus-additional-cni-plugins-vvzk9: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": write tcp 38.102.83.103:45762->38.102.83.103:6443: use of closed network connection Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845928 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846672 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846775 4766 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846891 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn podName:8da0c398-554f-47ad-aada-70e4b5c9ec98 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.346782417 +0000 UTC m=+20.984739853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dv5xn" (UniqueName: "kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn") pod "multus-additional-cni-plugins-vvzk9" (UID: "8da0c398-554f-47ad-aada-70e4b5c9ec98") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": write tcp 38.102.83.103:45762->38.102.83.103:6443: use of closed network connection Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.847145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848368 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848734 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.853116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.853780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.864381 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879190 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9kc\" (UniqueName: \"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879212 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879904 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.886449 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.897283 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.908570 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.917390 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.927416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.938434 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.949670 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.961355 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.977435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.993612 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.996650 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:51:31.123706674 +0000 UTC Jan 30 16:22:46 crc kubenswrapper[4766]: W0130 16:22:46.003693 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a25c516_3d8c_4fdb_9425_692ce650f427.slice/crio-368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0 WatchSource:0}: Error finding container 368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0: Status 404 returned error can't find the container with id 368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.029478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.039443 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.039592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.044591 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.045428 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.046761 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.047523 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.048730 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.049525 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.050286 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.051457 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.052261 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.052483 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-l6xdr" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.055367 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.055743 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.056103 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.057021 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.059692 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.060342 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.063126 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.063701 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.065720 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.066329 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.066904 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.069525 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.070720 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.071369 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.072577 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.073056 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.073809 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.076830 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.077498 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.079622 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.083685 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.085395 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 
16:22:46.086295 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.086639 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.087280 4766 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.087400 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.089827 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.091119 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.091758 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.095072 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.096881 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.098201 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.100521 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.101400 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.102512 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.103274 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.104457 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.105403 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.109916 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.110806 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.112039 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.113934 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.115242 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.116196 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.116821 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.119842 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.120838 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.121863 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.125572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.148102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.148146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.151245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vhmx5" event={"ID":"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2","Type":"ContainerStarted","Data":"e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.151277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vhmx5" event={"ID":"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2","Type":"ContainerStarted","Data":"bcf6fab4871f54ca9be9d9fc2ac5a6250af7cf9558678c7a35c43165a466ecbd"} Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.152004 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a299e8_188d_4777_bb82_a0994feabcff.slice/crio-458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a299e8_188d_4777_bb82_a0994feabcff.slice/crio-conmon-458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.153571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.155858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.155962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ef229e024d3235ddfa3de93d2c9a064c5b96d1b262c193283b09b0981dfc0409"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.159025 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 16:17:45 +0000 UTC, rotation deadline is 2026-11-30 11:01:48.56817191 +0000 UTC Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.159062 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7290h39m2.409111813s for next certificate rotation Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.160889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"da64b2bf34b406c771c571dae893c26b44c0c80fc71584fafe8548d33fc5cbe3"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.165609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.167289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flxfz" event={"ID":"8bd09169-41b7-4eb3-80a5-a842e79f7d94","Type":"ContainerStarted","Data":"4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.167334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flxfz" event={"ID":"8bd09169-41b7-4eb3-80a5-a842e79f7d94","Type":"ContainerStarted","Data":"9f06270adae90c7d7bd6c122e885399dfe099c64dfb53fdba92e06b97f1fb78a"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.169287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.169327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"b7c7571b036dc1cbf0576f5638a00f9530f0e7ad9d69b4b12af59327bef5efe3"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.208773 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.242834 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.312435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.328170 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.350304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.395064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.406006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.435827 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.467442 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.520868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.557895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.586262 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.626267 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.639040 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: W0130 16:22:46.661909 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da0c398_554f_47ad_aada_70e4b5c9ec98.slice/crio-a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06 WatchSource:0}: Error finding container a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06: Status 404 returned error can't find the container with id a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.677202 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.680463 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.695551 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.742686 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.753705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.753941 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.75392455 +0000 UTC m=+23.391881896 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.775737 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.805673 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.817497 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.836465 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 
16:22:46.855193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855243 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855265 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855319 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855299632 +0000 UTC m=+23.493256978 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855355 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855364 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855383 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855402 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855410 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855392714 +0000 UTC m=+23.493350060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855429325 +0000 UTC m=+23.493386741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855499 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855559 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855571 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855631 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.85561534 +0000 UTC m=+23.493572676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.884791 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.919522 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.941384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.955144 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.975971 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.997354 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:55:12.565978706 +0000 UTC Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.016220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.036057 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.038952 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.038990 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:47 crc kubenswrapper[4766]: E0130 16:22:47.039053 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:47 crc kubenswrapper[4766]: E0130 16:22:47.039199 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.055069 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.076535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.109399 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z 
is after 2025-08-24T17:21:41Z"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.115890 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.136421 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.174062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.175293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.175335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.176286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.177007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179610 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" exitCode=0
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179642 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179709 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"}
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.195485 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.216084 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.236216 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.255910 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.275227 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.311624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.315760 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.335968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.356054 4766 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.375730 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.400536 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.416430 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.436074 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.476514 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.495947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.517380 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.535653 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.556893 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.595493 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.597640 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.615855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.636938 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.664737 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.704588 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.743769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.787143 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.824098 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.863340 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.906344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.943984 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.986727 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.998085 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:10:31.397162052 +0000 UTC Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.024791 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.038578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.038726 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.063231 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.104123 4766 status_manager.go:875]
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.142815 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.183716 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8" exitCode=0 Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.183779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8"} Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.185916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120"} Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.193762 4766 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"nam
e\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.232856 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.267077 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.303219 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.349414 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.387311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc 
kubenswrapper[4766]: I0130 16:22:48.425200 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.465863 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.503491 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.543367 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.584461 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.623819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.668005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.704933 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.744505 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.774111 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.774294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.774273781 +0000 UTC m=+27.412231127 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.783109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:48 crc 
kubenswrapper[4766]: I0130 16:22:48.875361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875446 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875518 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875501609 +0000 UTC m=+27.513458955 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875533 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875537 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875552 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875565 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875569 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875591 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875599 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875582431 +0000 UTC m=+27.513539777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875603 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875617 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875609272 +0000 UTC m=+27.513566728 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875630142 +0000 UTC m=+27.513587488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.998699 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:10:34.807219361 +0000 UTC Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.039308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.039331 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:49 crc kubenswrapper[4766]: E0130 16:22:49.039452 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:49 crc kubenswrapper[4766]: E0130 16:22:49.040071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.192664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.194957 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47" exitCode=0 Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.195148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47"} Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.212301 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.232384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745326
5a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",
\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.252384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.265600 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.276297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.287160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.300314 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.316374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.328591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.339228 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.354618 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.367738 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.379539 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.999353 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:28:58.711856011 +0000 UTC Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.038858 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:50 crc kubenswrapper[4766]: E0130 16:22:50.038987 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.200313 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65" exitCode=0 Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.200359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65"} Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.217605 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.229607 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.247153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.258923 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.272211 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.284722 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.298118 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.308167 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.321905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.330952 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.349031 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.366641 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.383456 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.000417 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:53:10.037984439 +0000 UTC Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.028771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.032288 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038380 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.038605 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.038699 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.039975 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.050852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.066972 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.080098 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.091119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.105088 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.115407 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.126752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.141205 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.151849 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.167489 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 
2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.180483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.194763 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.204938 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf" exitCode=0 Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.205023 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.209109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.210394 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.222164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.234436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.244488 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.257802 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.268694 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.283523 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.296016 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.308928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.319773 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.337609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.349964 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.361331 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.372319 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.381466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc 
kubenswrapper[4766]: I0130 16:22:51.401665 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.421619 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.434713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.446620 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.458165 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.469718 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.480361 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.491423 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.502244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.513769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.518845 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.521008 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:51 crc 
kubenswrapper[4766]: I0130 16:22:51.524996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.527057 4766 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.527322 4766 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528303 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.536785 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.539567 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a
00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.546376 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.554210 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557969 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.568891 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.583952 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.598403 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.598524 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.599997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600060 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.627976 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.701965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702046 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804501 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906418 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.000986 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:08:31.044888516 +0000 UTC Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009232 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.039261 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.039381 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111825 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213727 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213766 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.216082 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.216117 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.221734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.238492 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.268009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.269710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.278609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.293251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.305464 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317428 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.320400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.330927 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.343874 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.353769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.375284 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.389211 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.400541 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.412734 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419936 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.424023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.435968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.475070 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b242
8318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.476659 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.487589 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.499714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.510616 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.529428 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.540816 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.552942 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.563287 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.574887 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.588529 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.601080 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624272 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.665976 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.714715 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.812526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.812733 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.812702792 +0000 UTC m=+35.450660138 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913850 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913906 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913920 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.913901199 +0000 UTC m=+35.551858555 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913996 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914041 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.914027292 +0000 UTC m=+35.551984658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914050 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914098 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914050 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914143 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914167 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914271 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.914248238 +0000 UTC m=+35.552205624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914112 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914330 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.91431943 +0000 UTC m=+35.552276856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.001625 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:37:15.370350632 +0000 UTC Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.033849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034338 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.039015 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:53 crc kubenswrapper[4766]: E0130 16:22:53.039380 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.039015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:53 crc kubenswrapper[4766]: E0130 16:22:53.039600 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156962 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.224718 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.252645 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/li
b/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259147 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259190 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.267814 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.282615 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"na
me\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.296314 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.307271 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.326770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.347391 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361225 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361251 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.362632 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.378907 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.393532 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.405288 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.417771 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.430449 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.444557 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463666 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566730 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771620 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873889 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.002383 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:36:59.794463511 +0000 UTC Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.038580 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:54 crc kubenswrapper[4766]: E0130 16:22:54.038775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.080861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081670 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229390 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b" exitCode=0 Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229538 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.243313 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 
16:22:54.267151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.285696 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287986 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.298705 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.310408 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.321703 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.333043 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.345391 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.358572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.369657 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.385364 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390284 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.396937 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.411488 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.424669 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494800 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.596996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700353 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.904740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.984914 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.003576 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:06:37.253372828 +0000 UTC Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.039201 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:55 crc kubenswrapper[4766]: E0130 16:22:55.039329 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.039215 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:55 crc kubenswrapper[4766]: E0130 16:22:55.039643 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.109917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110525 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.238053 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04" exitCode=0 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.238102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.253090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.264736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.278226 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.290644 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.301383 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316508 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.323731 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.376973 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.400603 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418367 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419003 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419023 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.434563 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.449921 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.463976 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.479072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.493770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522775 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625431 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625464 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728239 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.784060 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.819735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.829979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932639 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.003727 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:12:44.507042955 +0000 UTC Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.035040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.035061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.039272 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:56 crc kubenswrapper[4766]: E0130 16:22:56.039404 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.059804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.076128 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.088619 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.108988 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.124711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.136621 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137472 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.151752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.162726 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.173754 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.188223 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.208628 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.219867 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.231482 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240356 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.250826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.253562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.255249 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/0.log" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.257541 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d" exitCode=1 Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.257584 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.258301 4766 scope.go:117] "RemoveContainer" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.268631 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.286031 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.307881 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.321252 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.332772 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.343801 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.353164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.367301 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.381766 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.394262 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.407129 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.424333 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.438001 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445564 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.453015 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.469491 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.481276 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.491599 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.503564 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.532073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547646 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.548386 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.569103 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.582751 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.591063 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.604879 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.618010 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.636587 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 
16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.665327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.681543 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762583 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865562 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967991 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.004392 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:31:54.589321331 +0000 UTC
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.039350 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.039366 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.039486 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.039590 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070347 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.263207 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.264038 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/0.log"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.267999 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" exitCode=1
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.268031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de"}
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.268073 4766 scope.go:117] "RemoveContainer" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.269892 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de"
Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.270222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275439 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.287921 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.306999 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.320490 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.340674 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.357062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.372653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.377966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.384788 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.403782 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.419228 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.430344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.440993 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.462374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cff
d269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.485214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.499281 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.570041 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584509 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.591839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cff
d269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.609065 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.622416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.633082 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.641330 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.680581 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.719990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720088 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.722374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.732002 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf"] Jan 30 16:22:57 crc 
kubenswrapper[4766]: I0130 16:22:57.732256 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.732467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.746555 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.747977 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.765632 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z"
Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.776289 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.787460 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796787 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.797042 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.797497 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.805416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.818153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.829372 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.838453 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.851845 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.861841 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.877826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.890089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898820 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898968 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.899003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.899480 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.904397 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.905038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.918920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.920616 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925280 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.934229 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.947246 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.959569 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.972251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.982937 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.996822 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.005162 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:12:03.659352798 +0000 UTC Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.008765 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027428 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.038982 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:58 crc kubenswrapper[4766]: E0130 16:22:58.039242 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.056978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:58 crc kubenswrapper[4766]: W0130 16:22:58.071666 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5feae404_d53f_4bf5_af27_07a7ce350594.slice/crio-cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56 WatchSource:0}: Error finding container cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56: Status 404 returned error can't find the container with id cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56 Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234557 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.272148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.274041 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337140 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439997 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542746 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.851841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852664 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955994 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.005752 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:23:06.580802856 +0000 UTC Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.039431 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.039510 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.039561 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.039752 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058760 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.160995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161082 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263684 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.280547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.280790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.294117 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.310090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.325298 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.338274 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.348030 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.360240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366489 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.375164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.393160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.406523 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.417499 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.428166 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.439058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.452058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.464889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468895 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.476369 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.548993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"] Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.549497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.549566 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.564533 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.595311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.616043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.616085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") 
" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.619963 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 
16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.635373 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.646804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cn
i.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.661476 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673772 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.676089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.687392 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.699844 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.710580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.716992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.717061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.717130 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.717238 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.217218925 +0000 UTC m=+34.855176351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.721652 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.731673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.738895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.752389 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.763257 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776641 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.778049 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.789754 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is 
after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878898 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982222 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.006483 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:07:03.003210926 +0000 UTC Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.039119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.039249 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.086206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.086300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188705 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.222397 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.222519 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.222586 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:01.222567093 +0000 UTC m=+35.860524439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291301 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.394744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395338 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601479 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601488 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.703977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806358 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.830973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.831155 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.831134833 +0000 UTC m=+51.469092189 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909147 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.931919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.931990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.932037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.932069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932128 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932153 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932167 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932228 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932240 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932226717 +0000 UTC m=+51.570184063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932358 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.93234185 +0000 UTC m=+51.570299196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932235 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932384 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932379001 +0000 UTC m=+51.570336347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932247 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932496 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932528 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932633 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932600697 +0000 UTC m=+51.570558083 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.006653 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:55:20.161190173 +0000 UTC Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012099 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039100 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039097 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039284 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039376 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039446 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115428 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.235729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.235990 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.236079 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:03.236058164 +0000 UTC m=+37.874015510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322224 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.423986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526816 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629768 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629779 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732712 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.763489 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.782578 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.797667 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801537 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.812574 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816255 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.829880 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list elided; byte-identical to the previous patch attempt above... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.830018 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
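Both patch attempts above fail the same way: before the kubelet's status update reaches the API server, the request must pass the node.network-node-identity.openshift.io admission webhook, and the TLS handshake with that webhook fails because its serving certificate expired on 2025-08-24 while the node clock reads 2026-01-30. The error text comes from Go's standard crypto/x509 validity check. A minimal sketch of that check (the certificate path is hypothetical; only the NotBefore/NotAfter comparison mirrors the standard library):

// Minimal sketch: the validity-window check behind the
// "certificate has expired or is not yet valid" error above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path, not from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// crypto/x509 reports this condition as a CertificateInvalidError
		// whose text matches the log line above.
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is inside its validity window")
}

Until that webhook certificate is rotated (or the node clock agrees with the certificate's validity window), every status patch is rejected identically, which is why the kubelet exhausts its retry budget with "update node status exceeds retry count".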
Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835682 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938388 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.007159 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:18:25.255086748 +0000 UTC
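The rotation deadline logged by certificate_manager.go differs on every pass (19:18:25 here, then 13:33:13, 02:26:35, and 19:52:34 further down) even though the expiration is fixed at 2026-02-24 05:53:03. That is expected: the kubelet re-computes a jittered deadline inside the certificate's lifetime each time it checks, so a fleet of nodes does not rotate simultaneously. A hedged sketch of that computation (the 70-90% window is modeled on the upstream client-go certificate manager; the issuance time below is an assumption, since the log only shows the expiry):

// Hedged sketch of the jittered rotation deadline visible above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70-90% span of the
// certificate's lifetime, which is why each log line shows a new deadline.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log; NotBefore is assumed (one year earlier).
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}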
Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.038584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:02 crc kubenswrapper[4766]: E0130 16:23:02.038759 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040399 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142424 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244928 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.347005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.347014 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551650 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653812 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756764 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858691 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961788 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.007739 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:33:13.33571322 +0000 UTC Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039546 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039557 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039620 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039761 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039805 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064233 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.165989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166058 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.255019 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.255232 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.255329 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:07.255309326 +0000 UTC m=+41.893266752 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered
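The mount failure above is not retried immediately: nestedpendingoperations stamps the operation with durationBeforeRetry 4s and refuses retries until 16:23:07 (m=+41.89 relative to kubelet start). The delay grows by doubling on consecutive failures of the same operation; "not registered" means the kubelet's watch-based secret manager has not yet registered openshift-multus/metrics-daemon-secret for this pod, typically a startup-ordering symptom. A minimal sketch of that backoff, with the initial delay and cap as assumptions modeled on the kubelet volume manager:

// Minimal sketch of the doubling backoff behind "durationBeforeRetry 4s".
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the wait after each consecutive failure of
// the same operation, up to a cap. The constants are assumptions.
func durationBeforeRetry(consecutiveFailures int) time.Duration {
	delay := 500 * time.Millisecond // assumed initial delay
	maxDelay := 2*time.Minute + 2*time.Second
	for i := 0; i < consecutiveFailures; i++ {
		delay *= 2
		if delay > maxDelay {
			return maxDelay
		}
	}
	return delay
}

func main() {
	for failures := 0; failures <= 8; failures++ {
		fmt.Printf("after %d consecutive failures: wait %v\n", failures, durationBeforeRetry(failures))
	}
	// A 4s delay, as in the log, corresponds to the third doubling
	// (500ms -> 1s -> 2s -> 4s) under these assumed constants.
}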
Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268490 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370783 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473518 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473552 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.782903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.782959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885975 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988853 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.008748 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:26:35.286594272 +0000 UTC Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.039330 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:04 crc kubenswrapper[4766]: E0130 16:23:04.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399430 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399511 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501625 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604135 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812682 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914903 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.009373 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:52:34.458294173 +0000 UTC Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017375 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038337 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038360 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038419 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038569 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038738 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.221998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427277 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530468 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633696 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736896 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839425 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942294 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.010045 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:17:05.363681721 +0000 UTC Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.039366 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:06 crc kubenswrapper[4766]: E0130 16:23:06.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045302 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.052804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.066350 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.077335 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.089844 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.102336 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.115237 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.127472 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.140155 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.149884 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.159864 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.169578 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.179650 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.188259 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.205293 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.218674 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.229911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249310 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604342 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709426 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814850 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917259 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.011233 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:29:55.571280892 +0000 UTC Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020813 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038799 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038909 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.038938 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.038996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.039089 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.299698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.299871 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.299959 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:15.299938887 +0000 UTC m=+49.937896243 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327612 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533794 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533823 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636960 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.739123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.739307 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841982 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945484 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.012353 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:42:02.032628955 +0000 UTC Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.039147 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:08 crc kubenswrapper[4766]: E0130 16:23:08.039393 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151080 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253720 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459510 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562455 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665744 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.768854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871646 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.013078 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:56:13.873077564 +0000 UTC
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039369 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
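
The certificate_manager entries at 16:23:08 and 16:23:09 report the same expiration (2026-02-24) but very different rotation deadlines (2025-12-16 vs 2025-12-29): the deadline is re-drawn with fresh jitter each time it is evaluated. A sketch of that computation in the spirit of client-go's certificate manager, assuming the deadline is a uniform draw between 70% and 90% of the certificate's lifetime; the fractions and the notBefore date below are assumptions, only the expiration matches the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's
// validity window. The 70-90% range is an assumption for illustration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	// Hypothetical one-year window; only the expiration matches the log.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)

	// Each evaluation re-rolls the jitter, which is why the logged
	// deadlines bounce between mid-November and mid-January.
	for i := 0; i < 3; i++ {
		fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n",
			notAfter, rotationDeadline(notBefore, notAfter))
	}
}
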
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038962 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039702 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039888 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077651 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.284111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.284371 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386821 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489529 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.695964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696655 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.014150 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:14:23.895250366 +0000 UTC Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.019970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.038841 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:10 crc kubenswrapper[4766]: E0130 16:23:10.039000 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.121976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122059 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224111 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326863 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.532951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.532991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533034 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636342 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738503 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.840993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943747 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.014731 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:34:07.178164548 +0000 UTC
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
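
Every not-ready entry in this stretch points at the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime's network-readiness check amounts to scanning that directory for a usable network config. A minimal sketch of such a scan, assuming the usual .conf/.conflist/.json extensions (the extension list is an assumption, not taken from the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The accepted extensions mirror what CNI-based
// runtimes commonly load and are an assumption here.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// The condition the kubelet keeps republishing above: the node
		// stays NotReady until the network provider writes a config.
		fmt.Println("container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once the network operator drops a config into that directory, the same check flips NetworkReady to true and the Ready condition clears.
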
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038450 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038508 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038926 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.045859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254276 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.356945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357078 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460549 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667448 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770263 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872948 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
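
The status-update failure recorded just below ("tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z") is the standard validity-window check: verification rejects any certificate whose NotBefore/NotAfter bounds exclude the current time. A sketch of that check with crypto/x509; the certificate path is a guess taken from the /etc/webhook-cert/ volume mount in the entry that follows:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity applies the validity-window test that fails in the
// webhook call below: reject when now falls outside
// [NotBefore, NotAfter].
func checkValidity(path string, now time.Time) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Hypothetical path; the log itself only shows the mount point.
	if err := checkValidity("/etc/webhook-cert/tls.crt", time.Now()); err != nil {
		fmt.Println("tls: failed to verify certificate: x509:", err)
	}
}
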
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.015916 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:09:14.083694882 +0000 UTC Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.039406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.039539 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.040754 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.054534 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.066568 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078836 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.089957 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.102834 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.115507 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.131639 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.141729 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.153308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.163408 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.174846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180337 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.192443 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.206736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.214819 4766 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218454 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-so
cket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218638 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.230435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.230449 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233571 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.239309 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.244065 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247282 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259946 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.270146 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282782 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.319050 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.320944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.321667 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.332231 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.342297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.352451 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.363708 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.375913 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385193 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.389199 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.399267 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.412486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.429728 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.447002 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.467339 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.497516 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.515158 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.529160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.554025 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.567996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc 
kubenswrapper[4766]: I0130 16:23:12.589611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589636 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692316 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794487 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897683 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000200 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.016829 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:14:22.003570946 +0000 UTC
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039306 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039353 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039407 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039512 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102522 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205420 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.326356 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.326940 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329568 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" exitCode=1 Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329610 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329687 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.330211 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.330382 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.349528 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.361842 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.373072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.384799 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.398859 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.412073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.422836 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.432778 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.443637 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.453646 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc 
kubenswrapper[4766]: I0130 16:23:13.466210 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.484399 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf77
1dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" 
but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.497479 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.509860 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.521600 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.534740 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615103 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717298 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820346 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922422 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.017146 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:44:06.921974672 +0000 UTC Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.024991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.038915 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:14 crc kubenswrapper[4766]: E0130 16:23:14.039066 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229586 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331698 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.334443 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.339726 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:14 crc kubenswrapper[4766]: E0130 16:23:14.339896 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.351968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.362028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.372526 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.383954 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.395244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437942 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.440625 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.454624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.468168 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.479400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.490905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.500501 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.511471 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.520198 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.535829 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.540006 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.552344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.566135 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642733 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.744993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848107 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951139 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.017537 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:24:59.152485004 +0000 UTC
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039230 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039313 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039358 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039570 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155970 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258135 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.381536 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.381723 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.381830 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:31.381805976 +0000 UTC m=+66.019763352 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465090 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567598 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670314 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773607 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877753 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.018512 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:51:01.386038027 +0000 UTC
Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.039015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:16 crc kubenswrapper[4766]: E0130 16:23:16.039215 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.056898 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.073054 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.083995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084461 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084869 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.095342 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.109707 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.124158 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.135552 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.152583 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.169786 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186145 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.200436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.211028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.223966 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.236601 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.249506 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.260573 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288949 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401620 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.503870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504049 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708567 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810876 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.899415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:23:16 crc kubenswrapper[4766]: E0130 16:23:16.899690 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:48.899652069 +0000 UTC m=+83.537609475 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913395 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000994 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.001023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001033 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001131 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001106033 +0000 UTC m=+83.639063419 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001145 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001158 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001166 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001221 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001220 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001247 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001262 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001233 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001221626 +0000 UTC m=+83.639179032 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001293 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001276927 +0000 UTC m=+83.639234333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001310 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001302178 +0000 UTC m=+83.639259644 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.019066 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:16:48.033275362 +0000 UTC Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038342 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038493 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038737 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118915 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221225 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427393 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633308 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.666058 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.674145 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.677920 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.687134 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.703896 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.719090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.732654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.747393 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.766973 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.784247 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.801821 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.816795 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.828633 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.841126 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.854024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.862929 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.874513 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.885370 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.942950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.942987 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943036 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.020248 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:30:06.63252422 +0000 UTC Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.038751 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:18 crc kubenswrapper[4766]: E0130 16:23:18.038940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046746 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149335 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561744 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664307 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664329 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766755 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.868965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.868999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.970962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971051 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.021130 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:16:51.329029657 +0000 UTC Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038540 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038712 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177480 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281320 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384424 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487218 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589953 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692268 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795394 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.897976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.021889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:05:54.859965173 +0000 UTC Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.038708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:20 crc kubenswrapper[4766]: E0130 16:23:20.038863 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103229 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205631 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411479 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411506 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514203 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719270 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822430 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.022396 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:50:24.88891897 +0000 UTC
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.027900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.027988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028057 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038646 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038727 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038656 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.038829 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.038938 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.039247 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131407 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
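Every one of these pod sync failures traces back to the same condition: the container runtime reports NetworkReady=false because nothing matching a CNI network config exists under /etc/kubernetes/cni/net.d/. The sketch below is only an illustration of that check, not the runtime's actual code (in CRI-O the real lookup is done by its ocicni plugin handling); the path comes from the error text above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether the CNI conf directory contains any file a
// CNI config loader would pick up. Until such a file appears, the runtime
// keeps answering NetworkReady=false and the kubelet logs the errors above.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders accept
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI config present:", ok, "err:", err)
}
```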
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234603 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336846 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438962 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.541999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.645005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953470 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.023290 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:18:24.730049379 +0000 UTC
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.038609 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.038721 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158719 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363618 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.448521 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
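The status patch above never reaches the Node object: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, five months before this log's clock, so the kube-apiserver's webhook call fails TLS verification. A hypothetical diagnostic, not part of the kubelet, that dials the endpoint named in the error and prints the offending certificate's validity window:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the webhook error above. InsecureSkipVerify is
	// deliberate here: the goal is to read an already-expired certificate,
	// not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
		leaf.Subject, leaf.NotBefore, leaf.NotAfter, time.Now().After(leaf.NotAfter))
}
```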
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452902 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.468479 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
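The identical failure repeating at 16:23:22.448521, .468479, and .493677 is the kubelet's bounded retry of a single status sync: it attempts the patch up to a fixed count (the kubelet constant nodeStatusUpdateRetry is 5) before giving up until the next sync period. A schematic of that loop, with the real API call stubbed out by a hypothetical patch function:

```go
package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the kubelet constant of the same name;
// the patch argument stands in for the real PATCH against the Node object.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	var err error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err = patch(); err == nil {
			return nil
		}
		// Each failed attempt corresponds to one
		// "Error updating node status, will retry" line in the log.
	}
	return fmt.Errorf("update node status exceeds retry count: %w", err)
}

func main() {
	err := updateNodeStatus(func() error {
		return errors.New("webhook serving certificate expired")
	})
	fmt.Println(err)
}
```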
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473241 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.493677 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499085 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.515605 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.535971 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.536135 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538634 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642378 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744194 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949635 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.023781 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:57:21.078551278 +0000 UTC Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039354 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039480 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039643 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039768 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.154944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257622 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462476 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565328 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770704 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975891 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.024644 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:59:19.402867346 +0000 UTC Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.039070 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:24 crc kubenswrapper[4766]: E0130 16:23:24.039230 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.077983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.181003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.181017 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283564 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489205 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695122 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797972 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900535 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.025777 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:10:17.296184681 +0000 UTC Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039138 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039243 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039322 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105297 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.207997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310224 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413317 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.821908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027800 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.026065 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:10:59.742007517 +0000 UTC Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.038582 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:26 crc kubenswrapper[4766]: E0130 16:23:26.038730 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.040280 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:26 crc kubenswrapper[4766]: E0130 16:23:26.040498 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.052968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.066353 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.076380 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.093359 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.106711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129112 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129724 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.142033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.153565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.164586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.177104 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.189028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.202063 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
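
Aside on the recurring webhook failure: every patch above is rejected for the same reason. The TLS handshake to https://127.0.0.1:9743 fails the x509 time-validity check because the wall clock (2026-01-30T16:23:26Z) is past the webhook certificate's notAfter (2025-08-24T17:21:41Z). Below is a standalone sketch of just that comparison for a PEM certificate on disk; the /etc/webhook-cert/tls.crt path is a guess based on the webhook container's volume mount above, not something the log confirms.

    // cert_validity_check.go: reproduce the [NotBefore, NotAfter] window test
    // behind "x509: certificate has expired or is not yet valid".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt") // hypothetical path
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	now := time.Now()
    	switch {
    	case now.Before(cert.NotBefore):
    		fmt.Printf("not yet valid: current time %s is before %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    	case now.After(cert.NotAfter):
    		// The branch this log is hitting: 2026-01-30T16:23:26Z is after
    		// the certificate's notAfter of 2025-08-24T17:21:41Z.
    		fmt.Printf("expired: current time %s is after %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	default:
    		fmt.Println("certificate is within its validity window")
    	}
    }
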
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.214420 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.228769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232541 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.241538 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.255592 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.269947 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334762 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334796 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746811 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.029958 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:35:47.494178092 +0000 UTC
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039383 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039387 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039667 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039523 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056428 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261654 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.466977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569560 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774305 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878275 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981112 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.030438 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:38:36.171482427 +0000 UTC
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.039035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:28 crc kubenswrapper[4766]: E0130 16:23:28.039272 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083957 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.186999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187076 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493694 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800983 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903106 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005565 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.031396 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:09:04.560009758 +0000 UTC
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038791 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039016 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038963 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039158 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107550 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211133 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519415 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621365 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827142 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.928962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929059 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031506 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:31:39.291684466 +0000 UTC
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031903 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.039455 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:30 crc kubenswrapper[4766]: E0130 16:23:30.039581 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134842 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
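For sifting through records like these, a small parser for the klog prefix (severity letter, MMDD date, wall-clock time, PID, source file:line) is convenient. This is a reading aid for the dump, not kubelet code; the regular expression is an assumption fitted to the lines above.

package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the klog header, e.g. "I0130 16:23:30.031506 4766 certificate_manager.go:356]".
var klogRe = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\]`)

func main() {
	line := `I0130 16:23:30.031506 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC`
	if m := klogRe.FindStringSubmatch(line); m != nil {
		// m[1]=severity, m[2]=MMDD, m[3]=time, m[4]=PID, m[5]=source location
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n", m[1], m[2], m[3], m[4], m[5])
	}
}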
Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237752 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340361 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749658 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852109 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954535 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954546 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.032277 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:18:47.58765427 +0000 UTC
Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038773 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038798 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038892 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038963 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056769 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261151 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.447127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.447386 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.447491 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:03.447466783 +0000 UTC m=+98.085424189 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468196 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570160 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877494 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979563 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.033136 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:15:48.899637259 +0000 UTC Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.038539 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.038648 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081991 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184720 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.288093 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390506 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493212 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597140 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699692 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801748 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899125 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.910949 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914812 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.927070 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.943049 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946406 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961194 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.973319 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.033338 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:20:47.277812912 +0000 UTC Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038621 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038692 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.038746 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038782 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.038844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.039139 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383824 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.398866 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399140 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008" exitCode=1 Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399674 4766 scope.go:117] "RemoveContainer" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.412656 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.423212 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.436969 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.447299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.457844 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.469076 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.478291 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486260 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.501086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd1687
28257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.515521 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.526403 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.537420 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.546822 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588808 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.690967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691046 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.726780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.750024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.763685 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.778434 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.789119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793604 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896421 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.034231 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:57:11.303044139 +0000 UTC Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.038687 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:34 crc kubenswrapper[4766]: E0130 16:23:34.038788 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204436 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305923 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.403227 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.403281 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.411005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.417678 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.428766 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.437905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.455100 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.468051 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.480878 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.493047 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.504210 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.517011 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.530208 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.541456 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.551312 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.561654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.572812 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.581466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.590946 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.599437 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615239 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.819936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.819992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923458 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025802 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.034908 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:04:04.526001435 +0000 UTC Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039188 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039239 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039290 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039392 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039447 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128307 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128386 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230998 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332993 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435922 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538404 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641419 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743709 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950155 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.035706 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:28:21.287282315 +0000 UTC Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.039355 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:36 crc kubenswrapper[4766]: E0130 16:23:36.039517 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.052868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.052962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053153 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.064467 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.077107 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.088806 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.101427 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.113086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.133113 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.144662 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.160550 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428
318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.174516 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.184275 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.196195 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.205891 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.225653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.238814 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.251202 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.262804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361968 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464683 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.566994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567069 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669469 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.875012 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977863 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.036603 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:40:54.164602687 +0000 UTC Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038948 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038958 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038961 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039214 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039063 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039350 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079758 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285508 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.597841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598976 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701309 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803527 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907799 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009845 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.037345 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:36:45.307604891 +0000 UTC Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.038689 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:38 crc kubenswrapper[4766]: E0130 16:23:38.038834 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112413 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526678 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628697 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.731001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.731013 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.833008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.833017 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935109 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037598 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:49:21.32062105 +0000 UTC
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037724 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.038964 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.039005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039079 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.038971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039211 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242837 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448345 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550791 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.652960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.652991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653032 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755339 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959995 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.038518 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:38:08.839797951 +0000 UTC Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.038531 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:40 crc kubenswrapper[4766]: E0130 16:23:40.038678 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062362 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267952 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267998 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473206 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575873 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677737 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780410 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883829 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.985955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986072 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.038994 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039004 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039059 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039051 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:15:19.038612132 +0000 UTC Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039444 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039819 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039878 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.088972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089031 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.292981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293048 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.422856 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.425288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"}
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.426353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.436826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.446580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.458901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.467874 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.481398 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496937 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.497085 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.511299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.539701 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.554095 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.572819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.588050 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599240 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599337 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.610297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.627997 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.639548 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.650790 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.661070 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701795 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804278 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906879 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.038932 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.039144 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:38:23.858525015 +0000 UTC Jan 30 16:23:42 crc kubenswrapper[4766]: E0130 16:23:42.039398 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.052347 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111495 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318402 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422523 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.430898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.432249 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.436743 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" exitCode=1 Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.436826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.437051 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.438045 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:42 crc kubenswrapper[4766]: E0130 16:23:42.438384 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.457425 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.472570 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.489290 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.505839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.521957 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524855 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.538466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time
2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.551882 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.565779 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.578138 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.593299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.604810 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.615119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627749 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.633001 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8ca
bdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.649075 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.664138 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.679047 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.692364 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.706068 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940879 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039117 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039163 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039264 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039293 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:18:29.96848122 +0000 UTC Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039465 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039518 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043770 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089609 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.113461 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118990 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.133901 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
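Every one of these retries fails identically: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30T16:23:43Z, so the TLS handshake is rejected before the status PATCH ever reaches the node object. A minimal Go sketch (a hypothetical diagnostic, not part of this log; the address comes from the Post URL in the error) that dials the endpoint and prints the certificate's validity window:

// Hypothetical diagnostic: inspect the webhook serving certificate that the
// kubelet's node-status PATCH is failing against. Address taken from the log
// line: Post "https://127.0.0.1:9743/node?timeout=10s".
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification deliberately: the point is to read the expired
		// certificate, not to trust it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

A notAfter well in the past against a 2026 clock would confirm the failure is stale certificate rotation, not connectivity.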
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138723 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.151934 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
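The setters.go entries above log the Ready condition as plain JSON, and decoding it isolates the root cause: the node is held NotReady because no CNI configuration exists in /etc/kubernetes/cni/net.d/, i.e. the network plugin has not come back up. A small sketch (the struct and its field names are assumptions chosen to mirror the JSON keys, not the upstream v1.NodeCondition type) that parses the logged condition object:

// Sketch: decode the condition JSON exactly as it appears in the
// "Node became not ready" entries above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the JSON keys in the logged condition object.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the setters.go:603 log entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s, reason %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason, c.Message)
}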
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156495 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.172983 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.192473 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.192987 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
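The .192987 entry that closes this burst follows mechanically from the five failed attempts before it: the kubelet retries the status update a fixed number of times per sync (nodeStatusUpdateRetry, 5 in the upstream kubelet source) and then gives up with "update node status exceeds retry count". A schematic of that control flow (a sketch of the behavior seen here, with illustrative names, not the actual kubelet code):

// Schematic sketch of the retry loop behind the entries above; the real
// logic lives in the kubelet's kubelet_node_status.go.
package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry matches the fixed retry count in the upstream kubelet.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH that the expired-certificate
// webhook rejects; in this log it fails on every attempt.
func patchNodeStatus() error {
	return errors.New("failed calling webhook: tls: certificate has expired or is not yet valid")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return // a successful attempt would end the loop early
	}
	// With all attempts exhausted, the terminal error seen at 16:23:43.192987 fires.
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}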
event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.195014 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297466 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399911 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.442619 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.445994 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.446129 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.462304 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.476858 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.491925 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z"
Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504652 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.507839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.520144 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.536924 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.553323 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.562952 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.587346 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.616615 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.630775 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.640243 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.650161 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.659980 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.668483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.686028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.705713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc 
kubenswrapper[4766]: I0130 16:23:43.709465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.719477 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.812671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915803 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018237 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.038857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:44 crc kubenswrapper[4766]: E0130 16:23:44.039021 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.039834 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:51:31.042616073 +0000 UTC Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.120956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121570 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224438 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.429969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430065 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533092 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635935 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.738948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.738999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943538 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039495 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039502 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039621 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039705 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039811 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.040318 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:10:47.910878693 +0000 UTC Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.045974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046053 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250666 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.353977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457628 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663249 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765955 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868889 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:46 crc kubenswrapper[4766]: E0130 16:23:46.038525 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.040534 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:56:51.111839811 +0000 UTC Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.058169 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.070943 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074954 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.088897 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.101657 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.112033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.128023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.144111 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.157639 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.168674 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177553 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.180994 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.190605 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is 
after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.200363 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.212164 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.222548 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.244045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.261281 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.275296 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280259 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.287935 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382819 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.485007 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690851 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794612 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039443 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039512 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039622 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039724 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039876 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.041423 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:27:27.017518218 +0000 UTC Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103860 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413621 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516551 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.620054 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722770 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927957 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.038342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:48 crc kubenswrapper[4766]: E0130 16:23:48.038473 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.041588 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:16:26.864939336 +0000 UTC
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441589 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544324 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749741 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749755 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.851980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852067 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.931128 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:23:48 crc kubenswrapper[4766]: E0130 16:23:48.931356 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.931322354 +0000 UTC m=+147.569279700 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.955011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.955024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032748 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032762 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032834 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.032812461 +0000 UTC m=+147.670769807 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032897 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032919 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032933 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032971 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.032959431 +0000 UTC m=+147.670916797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032893 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033011 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.033003753 +0000 UTC m=+147.670961119 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033020 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033059 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033072 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033144 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.033124183 +0000 UTC m=+147.671081539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.038699 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.038846 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.038923 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.039017 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.039372 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.039505 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.041982 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:48:54.945482776 +0000 UTC
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266649 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369340 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475158 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578791 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.038667 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:50 crc kubenswrapper[4766]: E0130 16:23:50.038810 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.042622 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:51:11.747058018 +0000 UTC
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093658 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196507 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713415 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918647 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.020940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.020999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021015 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021042 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039319 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039161 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039475 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039519 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.043691 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:39:59.066129954 +0000 UTC
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123925 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.226767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227110 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330305 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535260 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637953 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.741001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.741009 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843443 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.946000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.946011 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.038340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:23:52 crc kubenswrapper[4766]: E0130 16:23:52.038507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.044078 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:23:14.579113988 +0000 UTC
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047928 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150298 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.355633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.355935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356611 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460374 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.567372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568166 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671452 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775616 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877994 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.039413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.039470 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.039570 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.039675 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.040027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.040423 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.044673 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:05:32.413613619 +0000 UTC Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084627 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187968 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.290928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.292077 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.397847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.398982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399336 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.440006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.440020 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.456091 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.479151 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483697 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.497552 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.501983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502102 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.514983 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519337 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.532720 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.532941 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534926 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638236 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.741947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742080 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845581 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845601 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948868 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.039308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:54 crc kubenswrapper[4766]: E0130 16:23:54.039536 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.044845 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:36:18.525127483 +0000 UTC Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051428 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362329 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.569068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.671797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774462 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877404 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.979961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980499 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.038943 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.039062 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039117 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.039076 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039293 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039345 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.040103 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.040344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.044933 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:44:09.530110018 +0000 UTC Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185380 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491804 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594660 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800758 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006813 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.039505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:56 crc kubenswrapper[4766]: E0130 16:23:56.039692 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.045082 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:15:12.834885485 +0000 UTC Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.061432 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.077889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.093110 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.107533 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110216 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110252 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.124089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.138711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.153147 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.169052 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.182780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.195018 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc 
kubenswrapper[4766]: I0130 16:23:56.206225 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212789 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.221852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.250021 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.264670 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason
\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2
bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.281462 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file 
check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.298934 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.312572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316339 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.328726 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.421967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422091 4766 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524802 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730862 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833787 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038547 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038552 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038750 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038815 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038893 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.045930 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:23:24.147922994 +0000 UTC Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245382 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348347 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451333 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758817 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.861997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862122 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965231 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.039327 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:58 crc kubenswrapper[4766]: E0130 16:23:58.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.046318 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:50:29.581401907 +0000 UTC Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169236 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375163 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478357 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582327 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685306 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788826 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.892508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995310 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.039217 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.039398 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.039661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.039756 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.040079 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.040160 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.046674 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:32:31.284290467 +0000 UTC Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303817 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406716 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509108 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611762 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714435 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817795 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920432 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.024005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.038532 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:00 crc kubenswrapper[4766]: E0130 16:24:00.038715 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.047762 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:22:20.179583102 +0000 UTC Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126423 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.436913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.436971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437025 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539146 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642774 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745520 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848949 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952830 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.039696 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.040005 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.040346 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.048338 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:13:43.438421431 +0000 UTC Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055642 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158636 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261242 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364510 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569844 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.774942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.774996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877821 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.038652 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:02 crc kubenswrapper[4766]: E0130 16:24:02.038836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.049305 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:16:38.785518667 +0000 UTC Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.056694 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.084010 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186708 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.289905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.394778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499426 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704988 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910155 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014701 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.038909 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.039005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.038949 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039093 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039269 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039371 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.050421 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:23:21.743220591 +0000 UTC Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.220024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322959 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.488044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.488251 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.488340 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:25:07.488321207 +0000 UTC m=+162.126278553 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631977 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691512 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.711291 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.717021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.717045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.732743 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737616 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.754090 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
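The "Error updating node status, will retry" entry above keeps failing the same way on every retry: the node status patch must pass the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, but that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30, so the TLS handshake is rejected before the POST is ever sent. The check that fails is the standard x509 validity-window test; a self-contained Go sketch of it (the certificate path is a placeholder, since the log does not say where the webhook's cert lives on disk):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path, not taken from the log.
	data, err := os.ReadFile("serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}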
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759981 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.776581 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.781034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.781048 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.796156 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.796328 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797964 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005507 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.039054 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:04 crc kubenswrapper[4766]: E0130 16:24:04.039218 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.051052 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:41:08.807222536 +0000 UTC Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212535 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212561 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315164 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520417 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933597 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038389 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038442 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038405 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038821 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.051719 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:14:17.769657777 +0000 UTC Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145431 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145552 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.355010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.355026 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.458019 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.871769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974978 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.038733 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:06 crc kubenswrapper[4766]: E0130 16:24:06.038872 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.052696 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:24:56.509341867 +0000 UTC Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.054883 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.067165 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.076770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077495 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077563 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.088928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.101240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.113170 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.123801 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.137089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.156858 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73113a73-bf9b-47b1-9053-8dff1c9ea225\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108f1ca5a7cf1c4f0665b5b82b00c8b911dfe22582334836d3bc8a5afe17a1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfcc8c946ea5c547539386c797026307ba8bd235fd4694341695882ec2442702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://332a9a9c49123e23601444adafca95852030d0e19a682316100bc45b0f849209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374cae6e4bbbb88f2f6fc9093a4f5597b2afeae
8361a9a76ccf384cae5d8b2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01fb269c6fb534b4e45e60f3409c21e9700bc901eda3f975e990f77a9286838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.168686 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.177334 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179595 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179606 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.189311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.201119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.212887 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.224611 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.234565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.253066 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.268248 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.280308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.038900 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.038945 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.039108 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039316 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039752 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.053550 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:15:20.764445454 +0000 UTC
Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.039304 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:08 crc kubenswrapper[4766]: E0130 16:24:08.039402 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.053936 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:19:24.159095681 +0000 UTC
Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039323 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039356 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039514 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039565 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.054981 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:56:25.473719568 +0000 UTC Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060107 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163899 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the same five-line status block repeats at ~100 ms intervals from 16:24:09.060 through 16:24:09.987 ...]
Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.038743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:10 crc kubenswrapper[4766]: E0130 16:24:10.038946 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.039941 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"
Jan 30 16:24:10 crc kubenswrapper[4766]: E0130 16:24:10.040310 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff"
Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.056148 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:22:36.364555132 +0000 UTC
[... the same five-line status block repeats at ~100 ms intervals from 16:24:10.090 through 16:24:11.020 ...]
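Each setters.go:603 entry embeds the node's Ready condition as literal JSON after condition=, so the transition reason and message can be extracted mechanically instead of being read out of the wall of text. A small sketch (hypothetical helper; assumes the one-entry-per-line layout used above):

    import json
    import re
    import sys

    # capture the JSON object that setters.go logs after 'condition='
    COND = re.compile(r'"Node became not ready".*?condition=(\{.*\})')

    for line in sys.stdin:
        m = COND.search(line)
        if not m:
            continue
        cond = json.loads(m.group(1))  # the logged value is valid JSON
        print(cond['lastTransitionTime'], cond['reason'], '-', cond['message'])

Over this window every occurrence prints reason KubeletNotReady with the same NetworkPluginNotReady message, i.e. the node never left NotReady between 16:24:08 and 16:24:13.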
Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038351 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.038479 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038480 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038556 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.038697 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.039156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.056948 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:18:55.626282551 +0000 UTC
[... the same five-line status block repeats at ~100 ms intervals from 16:24:11.123 through 16:24:11.946 ...]
Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.039016 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:12 crc kubenswrapper[4766]: E0130 16:24:12.039137 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the same five-line status block repeats at 16:24:12.049 ...]
Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.057979 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:53:21.07565126 +0000 UTC
[... the same five-line status block repeats at ~100 ms intervals from 16:24:12.151 through 16:24:12.972 ...]
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039031 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039054 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039372 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039445 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039504 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039598 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.058335 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:03:54.713513657 +0000 UTC
[... the same five-line status block repeats at 16:24:13.075, .178 and .282 ...]
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696248 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799946 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902750 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004774 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.038799 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:14 crc kubenswrapper[4766]: E0130 16:24:14.038982 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.058508 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:48:31.022026758 +0000 UTC Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.195295 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"] Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.195857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.198639 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.198995 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.199548 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.200003 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.214625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.214604561 podStartE2EDuration="32.214604561s" podCreationTimestamp="2026-01-30 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.214502598 +0000 UTC m=+108.852459964" watchObservedRunningTime="2026-01-30 16:24:14.214604561 +0000 UTC m=+108.852561917" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.243944 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.243920082 podStartE2EDuration="12.243920082s" podCreationTimestamp="2026-01-30 16:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.243787418 +0000 UTC m=+108.881744804" watchObservedRunningTime="2026-01-30 16:24:14.243920082 +0000 UTC m=+108.881877448" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.289517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" podStartSLOduration=88.289484617 podStartE2EDuration="1m28.289484617s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.28740695 +0000 UTC m=+108.925364326" watchObservedRunningTime="2026-01-30 16:24:14.289484617 +0000 UTC m=+108.927441993" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.290153 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-flxfz" podStartSLOduration=89.290140685 podStartE2EDuration="1m29.290140685s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.275838124 +0000 UTC m=+108.913795510" watchObservedRunningTime="2026-01-30 16:24:14.290140685 +0000 UTC m=+108.928098081" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 
16:24:14.310516 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310803 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.323327 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.323310381 podStartE2EDuration="57.323310381s" podCreationTimestamp="2026-01-30 16:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.322584562 +0000 UTC m=+108.960541908" watchObservedRunningTime="2026-01-30 16:24:14.323310381 +0000 UTC m=+108.961267717" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.345339 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vhmx5" podStartSLOduration=89.345318613 podStartE2EDuration="1m29.345318613s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.345252971 +0000 UTC m=+108.983210327" watchObservedRunningTime="2026-01-30 16:24:14.345318613 +0000 UTC m=+108.983275959" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.397862 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-vvzk9" podStartSLOduration=89.397843218 podStartE2EDuration="1m29.397843218s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.384804192 +0000 UTC m=+109.022761538" watchObservedRunningTime="2026-01-30 16:24:14.397843218 +0000 UTC m=+109.035800564" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.398201 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-l6xdr" podStartSLOduration=89.398196028 podStartE2EDuration="1m29.398196028s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.397742026 +0000 UTC m=+109.035699372" watchObservedRunningTime="2026-01-30 16:24:14.398196028 +0000 UTC m=+109.036153374" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411432 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411512 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.412544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.425253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.434641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.453774 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podStartSLOduration=89.453758516 podStartE2EDuration="1m29.453758516s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.453286854 +0000 UTC m=+109.091244200" watchObservedRunningTime="2026-01-30 16:24:14.453758516 +0000 UTC m=+109.091715862" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.471245 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.471227523 podStartE2EDuration="1m29.471227523s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.470583516 +0000 UTC m=+109.108540882" watchObservedRunningTime="2026-01-30 16:24:14.471227523 +0000 UTC m=+109.109184869" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.501750 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.501728757 podStartE2EDuration="1m23.501728757s" podCreationTimestamp="2026-01-30 16:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.48573607 +0000 UTC m=+109.123693416" watchObservedRunningTime="2026-01-30 16:24:14.501728757 +0000 UTC m=+109.139686103" Jan 30 16:24:14 crc 
kubenswrapper[4766]: I0130 16:24:14.515067 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.544905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" event={"ID":"10b6cd3c-7511-4776-adb7-f48f2bdee155","Type":"ContainerStarted","Data":"cb019ecf96bad4457d0528b49e7c9763beec3d52ab36ea07c8241d8e708aaede"} Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039369 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.039553 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.039754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.040414 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.059328 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:38:32.504904334 +0000 UTC Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.059433 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.069528 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.549109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" event={"ID":"10b6cd3c-7511-4776-adb7-f48f2bdee155","Type":"ContainerStarted","Data":"b39a72a60a8f59aec3377b15d145a1e62af0582fc6dab5efefa03cad37531e0f"} Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.561863 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" podStartSLOduration=90.561847517 podStartE2EDuration="1m30.561847517s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:15.56160925 +0000 UTC m=+110.199566586" watchObservedRunningTime="2026-01-30 16:24:15.561847517 +0000 UTC m=+110.199804853" Jan 30 16:24:16 crc kubenswrapper[4766]: I0130 16:24:16.038563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:16 crc kubenswrapper[4766]: E0130 16:24:16.041174 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038949 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039145 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038978 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039297 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039379 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:18 crc kubenswrapper[4766]: I0130 16:24:18.039003 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:18 crc kubenswrapper[4766]: E0130 16:24:18.039155 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.038877 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.038892 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039298 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039453 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.039759 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039851 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.570510 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571244 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571302 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" exitCode=1 Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082"} Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571368 4766 scope.go:117] "RemoveContainer" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008" Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571855 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.572088 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485" Jan 30 16:24:20 crc kubenswrapper[4766]: I0130 16:24:20.039316 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:20 crc kubenswrapper[4766]: E0130 16:24:20.039829 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:20 crc kubenswrapper[4766]: I0130 16:24:20.577112 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039423 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039697 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039786 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040646 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.041132 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.041421 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:24:22 crc kubenswrapper[4766]: I0130 16:24:22.039610 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:22 crc kubenswrapper[4766]: E0130 16:24:22.039873 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039083 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039141 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039412 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039495 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:24 crc kubenswrapper[4766]: I0130 16:24:24.038924 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:24 crc kubenswrapper[4766]: E0130 16:24:24.039228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038484 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038534 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.038906 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.040550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038559 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.041119 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:26 crc kubenswrapper[4766]: I0130 16:24:26.039467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.040891 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.080414 4766 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.130883 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039168 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039217 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:27 crc kubenswrapper[4766]: E0130 16:24:27.039427 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:27 crc kubenswrapper[4766]: E0130 16:24:27.039532 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039255 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:27 crc kubenswrapper[4766]: E0130 16:24:27.039666 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:28 crc kubenswrapper[4766]: I0130 16:24:28.039327 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:28 crc kubenswrapper[4766]: E0130 16:24:28.039507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038372 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038510 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038606 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038671 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038727 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:30 crc kubenswrapper[4766]: I0130 16:24:30.038978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:30 crc kubenswrapper[4766]: E0130 16:24:30.039222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039007 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039273 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039007 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039399 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039461 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.132403 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.039075 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.039604 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:32 crc kubenswrapper[4766]: E0130 16:24:32.039859 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.622245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.622751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c"} Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.038781 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.038940 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.038951 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.039304 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.039874 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.040067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:34 crc kubenswrapper[4766]: I0130 16:24:34.040480 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:34 crc kubenswrapper[4766]: E0130 16:24:34.040725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039084 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039309 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039393 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039522 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039887 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039983 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.039555 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:36 crc kubenswrapper[4766]: E0130 16:24:36.042784 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.043484 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:24:36 crc kubenswrapper[4766]: E0130 16:24:36.133620 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.639544 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.642733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.643273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042534 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.043723 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042618 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.044167 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.044460 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.051089 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podStartSLOduration=112.051053975 podStartE2EDuration="1m52.051053975s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:36.673762366 +0000 UTC m=+131.311719722" watchObservedRunningTime="2026-01-30 16:24:37.051053975 +0000 UTC m=+131.689011321" Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.051596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"] Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.645656 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.646258 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:38 crc kubenswrapper[4766]: I0130 16:24:38.039332 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:38 crc kubenswrapper[4766]: E0130 16:24:38.039557 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039412 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:39 crc kubenswrapper[4766]: E0130 16:24:39.039555 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:39 crc kubenswrapper[4766]: E0130 16:24:39.039762 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:39 crc kubenswrapper[4766]: E0130 16:24:39.039817 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:40 crc kubenswrapper[4766]: I0130 16:24:40.039329 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:40 crc kubenswrapper[4766]: E0130 16:24:40.039489 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038574 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038608 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.038657 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.038809 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.039067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.038700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.042941 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.045607 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038658 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038719 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038811 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.041454 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.041732 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.042580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.042768 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.636128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.688739 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689400 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689619 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689880 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.690551 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.690559 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.691702 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.692142 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.692399 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.693117 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.696072 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.696827 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.697365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.698087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.699048 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.704671 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.707833 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.708787 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.709016 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.708777 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.709962 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.710041 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.716127 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.728595 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729161 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729534 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729575 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730222 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730550 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730737 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730942 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.731165 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737552 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.731957 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.732392 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 
16:24:44.737999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.734042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.734793 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735857 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735856 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735960 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.738493 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.739116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.739127 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736064 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736262 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736361 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736430 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736516 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736581 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736590 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736633 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736647 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736648 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 
16:24:44.736734 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736803 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736851 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736906 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736959 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736964 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737016 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.741515 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.742569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737026 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737077 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737134 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737193 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737227 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747621 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747738 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747747 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.748084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.748968 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.750205 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.751275 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765699 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765870 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765917 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.766076 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.766775 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.767166 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772427 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772584 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772866 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.774609 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.777489 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.777640 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772463 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.785764 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.785996 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786107 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786234 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786600 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786847 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786980 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787098 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787409 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786987 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787573 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787629 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787686 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.788220 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789082 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.790087 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.794329 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.796944 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.797752 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pr8gz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798083 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798146 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798410 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798292 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798704 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.799293 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.799423 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.800125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.801125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.801982 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.804973 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.806473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.806770 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.808799 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810911 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809032 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809141 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809744 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.811632 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809785 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810106 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.812993 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810480 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810604 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 
16:24:44.813490 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.817317 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.838121 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.842117 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.842978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.843444 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.854072 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.856505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.861319 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.864071 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.866650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.866750 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.867925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.867979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868106 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868901 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869061 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 
30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870588 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870625 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871110 4766 
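The reconciler_common.go entries above are kubelet's volume manager at work: a reconciler repeatedly diffs the desired state of the world (every volume the pods scheduled to this node need) against the actual state (what is already attached and mounted) and starts VerifyControllerAttachedVolume / MountVolume operations for the gap. A minimal Go sketch of that diff-and-act pattern, using hypothetical types rather than kubelet's real API:

package main

import "fmt"

// volumeID stands in for kubelet's richer volume identifiers.
type volumeID string

// reconcile starts a mount for every desired volume that is not yet in the
// actual state; kubelet's reconciler does this diff on every sync tick and
// runs each operation asynchronously, which is why "started" lines for many
// volumes interleave in the log.
func reconcile(desired, actual map[volumeID]bool, mount func(volumeID)) {
	for v := range desired {
		if !actual[v] {
			mount(v)
		}
	}
}

func main() {
	desired := map[volumeID]bool{"serving-cert": true, "config": true}
	actual := map[volumeID]bool{"config": true}
	reconcile(desired, actual, func(v volumeID) {
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
	})
}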
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871131 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871315 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.872987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.873067 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"]
Jan 30 16:24:44 crc kubenswrapper[4766]: E0130 16:24:44.873623 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.373601298 +0000 UTC m=+140.011558644 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874188 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874529 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874613 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"
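The E0130 entry from nestedpendingoperations.go:348 shows the retry gate kubelet keeps per volume operation: after MountVolume.MountDevice fails (the hostpath CSI driver has not registered yet), no retry is permitted until a deadline derived from an exponentially growing delay, here 500ms. A self-contained sketch of that bookkeeping; the doubling factor and the two-minute cap are illustrative assumptions, not kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// backoff mirrors the bookkeeping behind "No retries permitted until ...
// (durationBeforeRetry 500ms)": one gate per volume operation.
type backoff struct {
	delay time.Duration // current durationBeforeRetry
	until time.Time     // no retries permitted until this instant
}

// fail records a failed attempt at time now: the next retry is gated by the
// current delay, and the delay then doubles up to maxDelay.
func (b *backoff) fail(now time.Time, maxDelay time.Duration) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // first observed gate in the log
	}
	b.until = now.Add(b.delay)
	b.delay *= 2
	if b.delay > maxDelay {
		b.delay = maxDelay
	}
}

func main() {
	var b backoff
	now := time.Now()
	for attempt := 1; attempt <= 4; attempt++ {
		b.fail(now, 2*time.Minute)
		fmt.Printf("attempt %d failed; no retries permitted until %s\n",
			attempt, b.until.Format(time.RFC3339Nano))
		now = b.until // assume the retry fires at the deadline and fails again
	}
}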
pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875120 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875702 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875799 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875958 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod 
\"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.876019 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.876965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.877983 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879355 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.883262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.885089 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.885539 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.888489 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.889759 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910081 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.913087 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.913569 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.914118 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.915015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.917749 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.918500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.918970 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.919954 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.920154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.920744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.921259 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.922238 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.922783 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.924519 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926378 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926778 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.927725 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.928803 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.930215 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.930959 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.932891 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.934044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.934225 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.935027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.936741 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.937253 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.937322 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.938086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.938560 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.939535 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.940617 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.942398 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.942723 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.944863 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.945987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.947047 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.948149 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.949153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.950274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.952078 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984316 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984388 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984402 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984416 4766 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984457 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984482 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984496 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984508 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.985482 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lnxcr"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.985895 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986152 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986436 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:44 crc kubenswrapper[4766]: E0130 16:24:44.986576 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.486533012 +0000 UTC m=+140.124490358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986456 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986636 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986676 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986695 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986708 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986705 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod 
\"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986874 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987520 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988116 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988780 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.989404 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod 
\"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990635 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991758 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldm9\" (UniqueName: \"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: 
\"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991993 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992082 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992197 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992278 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992309 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992397 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992431 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992897 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992924 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993017 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993070 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993129 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993264 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn75m\" (UniqueName: \"kubernetes.io/projected/0d8527eb-86cc-45de-8821-7b80f37422d0-kube-api-access-vn75m\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993429 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod 
\"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993787 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993852 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993907 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993924 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994027 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994061 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994136 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994218 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994249 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.995030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.000229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.000785 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.001755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.001961 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002494 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.002804 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.502772334 +0000 UTC m=+140.140729680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.004683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005165 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005899 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010877 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lnxcr"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.008766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010395 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.006591 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010837 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.007530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011695 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 
16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012132 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012471 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012640 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012730 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013087 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013483 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013680 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013858 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014106 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014668 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.015221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.015558 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.016280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.016776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018031 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.019432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.020082 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.024692 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.028245 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-92gpq"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.029273 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.032607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.053260 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.072354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.092601 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.112703 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.114469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.114612 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.614579118 +0000 UTC m=+140.252536474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115385 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115497 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116858 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116964 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116979 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117012 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117037 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117097 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117236 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117328 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117396 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117414 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: 
\"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117815 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117831 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118055 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118077 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118196 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldm9\" (UniqueName: 
\"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118523 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118564 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118615 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") 
pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " 
pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118909 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118927 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: 
\"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119040 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119143 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.119234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120575 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.120679 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.62066318 +0000 UTC m=+140.258620526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.121659 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.122539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123978 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124162 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124199 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.125634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.125779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.126109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.127286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.132264 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.137651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.153302 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.172602 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.183005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.192449 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.212661 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220502 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.220709 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.720680951 +0000 UTC m=+140.358638307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220943 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221273 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221336 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221395 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " 
pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221419 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.221978 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.721947714 +0000 UTC m=+140.359905230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222073 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222235 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod 
\"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222314 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.232420 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.252310 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.262583 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.273036 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.301251 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.312097 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.321443 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.323676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.323840 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.823813454 +0000 UTC m=+140.461770810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.324136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.324504 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.824493993 +0000 UTC m=+140.462451339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.352947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.357936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.373354 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.393068 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.403997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.413989 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.425837 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.426139 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.926095795 +0000 UTC m=+140.564053281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.426384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.426848 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.926830954 +0000 UTC m=+140.564788300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.432463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.452577 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.488807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.492721 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.503651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.512947 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.527561 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.527719 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.528067 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.027972025 +0000 UTC m=+140.665929371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.528629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.529113 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.029088005 +0000 UTC m=+140.667045481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.547636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.552717 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.572823 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.580870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.592105 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.604243 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.612560 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.629967 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.630684 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.130345178 +0000 UTC m=+140.768302544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.630879 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.631958 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.13194193 +0000 UTC m=+140.769899296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.632255 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.655314 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.664910 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.672588 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.692621 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.698776 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.714721 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.722401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.732304 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.732832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.232776113 +0000 UTC m=+140.870733459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.733042 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.734036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.734642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.234624982 +0000 UTC m=+140.872582328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.741266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.753299 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.766644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.774849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.792931 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.804124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.812325 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.816764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.832917 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.835594 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.835789 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:46.335760212 +0000 UTC m=+140.973717558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.836119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.836861 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.336833622 +0000 UTC m=+140.974790988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.860066 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.868397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.872533 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.887901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.895511 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.903660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.913894 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.924039 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.924886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.930391 4766 request.go:700] Waited for 1.010207713s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.933031 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.938248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.938466 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.438432444 +0000 UTC m=+141.076389790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.939007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.939545 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.439490292 +0000 UTC m=+141.077447638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.954415 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.963231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.978653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.993207 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.012821 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.032266 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.039992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.040369 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.540316664 +0000 UTC m=+141.178274120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.041134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.041573 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.541555717 +0000 UTC m=+141.179513063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.051936 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.092749 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.107407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.113034 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.121688 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.131326 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.141796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.142136 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.642107402 +0000 UTC m=+141.280064748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.142433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.142832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.642816371 +0000 UTC m=+141.280773717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.151447 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.172416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.192346 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.195952 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.212208 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221645 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for 
the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221751 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert podName:9b23bdbc-d2d1-4404-8455-4e877764c72d nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.72172425 +0000 UTC m=+141.359681776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert") pod "olm-operator-6b444d44fb-hjlfz" (UID: "9b23bdbc-d2d1-4404-8455-4e877764c72d") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221837 4766 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221910 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert podName:bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.721882264 +0000 UTC m=+141.359839630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert") pod "ingress-canary-hfk7g" (UID: "bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221949 4766 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221997 4766 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222049 4766 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222011 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics podName:cdbd0f5d-e6fb-4960-a928-7a5dcc399239 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.721996997 +0000 UTC m=+141.359954353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics") pod "marketplace-operator-79b997595-wcmvb" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222211 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca podName:cdbd0f5d-e6fb-4960-a928-7a5dcc399239 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722137711 +0000 UTC m=+141.360095067 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca") pod "marketplace-operator-79b997595-wcmvb" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222241 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume podName:6289d893-d357-4aab-a2e9-389a422ebaa5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722228473 +0000 UTC m=+141.360185829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume") pod "dns-default-lnxcr" (UID: "6289d893-d357-4aab-a2e9-389a422ebaa5") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222330 4766 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222411 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config podName:7082a4c2-c998-4e1c-8264-2bafcd96d0c1 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722401417 +0000 UTC m=+141.360358773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config") pod "service-ca-operator-777779d784-vpgtw" (UID: "7082a4c2-c998-4e1c-8264-2bafcd96d0c1") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223097 4766 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223148 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls podName:6289d893-d357-4aab-a2e9-389a422ebaa5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.723135127 +0000 UTC m=+141.361092493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls") pod "dns-default-lnxcr" (UID: "6289d893-d357-4aab-a2e9-389a422ebaa5") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223191 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223223 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert podName:9b23bdbc-d2d1-4404-8455-4e877764c72d nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.72321535 +0000 UTC m=+141.361172706 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert") pod "olm-operator-6b444d44fb-hjlfz" (UID: "9b23bdbc-d2d1-4404-8455-4e877764c72d") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.233409 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.244082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.245410 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.745378898 +0000 UTC m=+141.383336264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.251430 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.279511 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.292910 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.311890 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.333496 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.346850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.347262 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.847246469 +0000 UTC m=+141.485203815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.351535 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.372479 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.401253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.440568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.442770 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.448141 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.448294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.948272586 +0000 UTC m=+141.586229942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.449014 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.449448 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.949432857 +0000 UTC m=+141.587390213 (durationBeforeRetry 500ms). 
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.451650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.473374 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.493020 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.512780 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.532126 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.550799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.551025 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.050987649 +0000 UTC m=+141.688945005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.551808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.552221 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.552317 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.052307484 +0000 UTC m=+141.690264830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.572551 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.623366 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.632323 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.638351 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.653059 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.653666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.653998 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.153937397 +0000 UTC m=+141.791894743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.654561 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.655068 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.155053067 +0000 UTC m=+141.793010413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.672102 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.692427 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.696320 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerStarted","Data":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"}
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.696376 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerStarted","Data":"777f165aaa35e8debb71a11164cf2e0013257285fafc5c165738c7722a8711a4"}
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.697846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" event={"ID":"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d","Type":"ContainerStarted","Data":"e092584838521da9e178559d35b263041054d50b5103e999ef7b3878e7fc6d19"}
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.697900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" event={"ID":"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d","Type":"ContainerStarted","Data":"7e5832395019d10128aad8c35d22d08e4bc20e98146fbd6ed4f59301d7c82dc2"}
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.698172 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gtfgx"
status="" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.700775 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-gtfgx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.700825 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podUID="2e83d3d7-f71f-47ab-a085-8d62e6b30f7d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.726301 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.753777 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.756678 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.256637719 +0000 UTC m=+141.894595075 (durationBeforeRetry 500ms). 
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757030 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr"
\"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.757674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.257654216 +0000 UTC m=+141.895611602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.758393 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.758883 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.760956 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.761635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.762398 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.764654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.773564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.778027 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.792521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.802762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.812378 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.835323 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.850824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.853749 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.858856 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.358825427 +0000 UTC m=+141.996782813 (durationBeforeRetry 500ms). 
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.858708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.859606 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.860302 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.360275576 +0000 UTC m=+141.998232922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.918291 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.928848 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.950141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.950412 4766 request.go:700] Waited for 1.947146897s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.954611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.956296 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn75m\" (UniqueName: \"kubernetes.io/projected/0d8527eb-86cc-45de-8821-7b80f37422d0-kube-api-access-vn75m\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.961733 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.962091 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.462060033 +0000 UTC m=+142.100017379 (durationBeforeRetry 500ms). 
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.962703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.963096 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.463087061 +0000 UTC m=+142.101044407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.964169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.980626 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq"
Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.990087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765"
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.995702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.008823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.015001 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.035644 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.037556 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.050134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.053487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.065486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.065645 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.565616388 +0000 UTC m=+142.203573734 (durationBeforeRetry 500ms). 
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.066011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.066569 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.566557814 +0000 UTC m=+142.204515160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.072286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.089618 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.114969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.128795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.151704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.159526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.159924 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.164864 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.167697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.167862 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.667837138 +0000 UTC m=+142.305794484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.168365 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.168874 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.668853064 +0000 UTC m=+142.306810410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.182236 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.182445 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.189870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.192544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.208449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.244468 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.245984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.252595 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.269220 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.269798 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.769779509 +0000 UTC m=+142.407736855 (durationBeforeRetry 500ms). 
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.270794 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.273433 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.290481 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.293813 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.308679 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.329961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.350111 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.371261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.371687 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.871676209 +0000 UTC m=+142.509633545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.871676209 +0000 UTC m=+142.509633545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.391115 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldm9\" (UniqueName: \"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.392867 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.409595 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.426004 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.429999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.430344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.438232 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.448731 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.451707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.453428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.468026 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.470504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.471937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.472611 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.972578004 +0000 UTC m=+142.610535360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.483226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.490512 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.493488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.499273 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.507588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.514814 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.515422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.523299 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.529993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.532869 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.539695 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.551386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.554832 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.570233 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.573878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.574691 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.07465797 +0000 UTC m=+142.712615316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.575791 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.590964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.599705 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.600364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.606267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.611806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.639944 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.647015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.654151 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.679464 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.179430496 +0000 UTC m=+142.817387842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679795 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679988 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680071 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680122 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 
crc kubenswrapper[4766]: I0130 16:24:47.680520 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680709 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680872 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681076 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681725 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.681774 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.181758318 +0000 UTC m=+142.819715664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.682005 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.698427 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.699884 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.755781 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:47 crc kubenswrapper[4766]: 
I0130 16:24:47.760412 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.781880 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.783033 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.785907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.786036 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.286012111 +0000 UTC m=+142.923969457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.790473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.790697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792156 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793001 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793097 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.794838 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.294821447 +0000 UTC m=+142.932778783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.796522 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.797323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.797579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.800091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803609 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.812106 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.812132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.814255 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"453741f26b3ed7a14992c9725d66eb2123ad6d2924bc25f9e558bc21015df26f"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.831602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.832415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/downloads-7954f5f757-254pk" event={"ID":"d9f3a679-bd83-4e31-aad4-0bd228e96c33","Type":"ContainerStarted","Data":"482247736a6b9798585a7bfb91e8563590e9e069d111adabf4004414cdb75d24"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.832489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-254pk" event={"ID":"d9f3a679-bd83-4e31-aad4-0bd228e96c33","Type":"ContainerStarted","Data":"841f424f8401c8e324936c2900408b4414e1055b54b6b487f0054fad637340a2"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.833056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.849622 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.850482 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"7d5198e682adb277aef82e5a6cb369b7c0fef6a5ded9d6edbc28d5907dc5f74f"} Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.850127 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda9e3070_71fe_41f6_8549_90d97f03c16e.slice/crio-0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13 WatchSource:0}: Error finding container 0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13: Status 404 returned error can't find the container with id 0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13 Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.851923 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8acca84e_2800_4a20_b3e8_84e021d1c001.slice/crio-161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c WatchSource:0}: Error finding container 161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c: Status 404 returned error can't find the container with id 161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.853017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.853594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerStarted","Data":"2671a13dece461b0f7ac5d5cf28d322e51625ef00baf6de4ac368b736fd3c301"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.854354 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.860290 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" event={"ID":"0d8527eb-86cc-45de-8821-7b80f37422d0","Type":"ContainerStarted","Data":"43496fbf302ed3230717bce41731ca26bacde92ea4fa65f4768c824a9d6d476a"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.861315 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.861408 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.865385 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerStarted","Data":"cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.865443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerStarted","Data":"442796fe00494142d89b0e1b9d6820cd3ac80019a54bf8a35e0ec68f7d85bbbf"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866374 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-gtfgx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866435 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866451 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podUID="2e83d3d7-f71f-47ab-a085-8d62e6b30f7d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866482 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.868484 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dgkvz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.868677 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 16:24:47 crc 
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.877965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.883398 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.888974 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897451 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.898346 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.398313669 +0000 UTC m=+143.036271155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.907080 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.910840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.911899 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.913932 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.917502 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c26a8d_0deb_4754_b815_4402e2aa5455.slice/crio-16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c WatchSource:0}: Error finding container 16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c: Status 404 returned error can't find the container with id 16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.938155 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.952998 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"]
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:47.977886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:47.999441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:47.999863 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.49984397 +0000 UTC m=+143.137801316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.107922 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.108599 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.608575163 +0000 UTC m=+143.246532509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.148889 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.161223 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.191309 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.210246 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.210927 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.710904684 +0000 UTC m=+143.348862030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.265348 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.297073 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.311234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.311662 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.811629914 +0000 UTC m=+143.449587260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.394670 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"]
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.396358 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-254pk" podStartSLOduration=123.396324816 podStartE2EDuration="2m3.396324816s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:48.38781461 +0000 UTC m=+143.025771956" watchObservedRunningTime="2026-01-30 16:24:48.396324816 +0000 UTC m=+143.034282162"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.421418 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.421903 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.921886066 +0000 UTC m=+143.559843412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.439130 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"]
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.441101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"]
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.522101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.522560 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.022535874 +0000 UTC m=+143.660493220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: W0130 16:24:48.616631 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31501ea8_c8ad_4854_bfda_157a49fd0b39.slice/crio-ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47 WatchSource:0}: Error finding container ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47: Status 404 returned error can't find the container with id ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.626013 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.626394 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.126378116 +0000 UTC m=+143.764335462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.727324 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.727870 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.227810465 +0000 UTC m=+143.865767811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.808764 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podStartSLOduration=122.808742457 podStartE2EDuration="2m2.808742457s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:48.806669143 +0000 UTC m=+143.444626489" watchObservedRunningTime="2026-01-30 16:24:48.808742457 +0000 UTC m=+143.446699803"
Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.828954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.829963 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.329941471 +0000 UTC m=+143.967898817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.876437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" event={"ID":"a8f468fe-13d2-4f44-ab3e-fd301aac78ce","Type":"ContainerStarted","Data":"da251e530ab2ae213417afd42802bcd7683d136713137e0b510b21cdbfe6eb43"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.881389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" event={"ID":"6eb3e5af-901e-42db-b01e-895e2d6c8171","Type":"ContainerStarted","Data":"4de9e4627339e1fcae6802873a837882e15805acebb31ecd9a71512f2df2f935"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.884453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"6ad5204974828ceb5cbfe7d2872cfeadbc7fd55a349a46eda58bf9243f7f8807"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.887624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerStarted","Data":"ddf6c9f183093e3abd62fdf360fb6093bb986bc490a6c0e7b7f79dd126d78283"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.893049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" event={"ID":"da9e3070-71fe-41f6-8549-90d97f03c16e","Type":"ContainerStarted","Data":"0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.896134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" event={"ID":"16c26a8d-0deb-4754-b815-4402e2aa5455","Type":"ContainerStarted","Data":"a0e7715b8beb895fdad8948f13686a6e4856da0d9638f596d65fb28a29771549"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.896247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" event={"ID":"16c26a8d-0deb-4754-b815-4402e2aa5455","Type":"ContainerStarted","Data":"16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.897977 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.899785 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" 
event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"d410fd65fd3929bcfa340dde5e3c83faefd8e517018e4cc42fb98f267ae5457b"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.902963 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerDied","Data":"1a422a313bd56f96b0268135869b328990e93c424eeb46ad57ae692d569fb0de"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.902858 4766 generic.go:334] "Generic (PLEG): container finished" podID="71148f4c-0b84-45c4-911c-0ec4b06cf710" containerID="1a422a313bd56f96b0268135869b328990e93c424eeb46ad57ae692d569fb0de" exitCode=0 Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.909499 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.925435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"08c5b468189bfe5f87ad1830d0d5545ac942bc2100fb13af56ab34a46a906741"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.928541 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" event={"ID":"752a21cf-698e-45b3-91e2-c00b0e82d991","Type":"ContainerStarted","Data":"6905d5eb8ef040a03594ce180dad5fcb64bf67647935e14512561f3c56d254a1"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.930851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.931620 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.431586485 +0000 UTC m=+144.069543821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.932087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerStarted","Data":"a6184cf8b16957ad6df32ef60f66d31e49cd6a8b7088d60d3d7abeb822aa03d8"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.934633 4766 generic.go:334] "Generic (PLEG): container finished" podID="c1191290-07ee-40c4-85e8-59545986d7db" containerID="daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef" exitCode=0 Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.934737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerDied","Data":"daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.939428 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.948881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"ecbc5022a09a2680184de1da4ce4b20a3d1d35bd4d0e5b84f23bd6c7f61891fc"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.951534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pr8gz" event={"ID":"af6eef76-87a0-459c-b2eb-61e06ae7386d","Type":"ContainerStarted","Data":"c072450d73a30397006517ca4a1710297da525d142769091fd9260d5e9d902a4"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.957153 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" event={"ID":"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a","Type":"ContainerStarted","Data":"d5a97bf1b7443a01476607e93b1a10db15b399d5c4c579f36ab578a3f39e7592"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.966345 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.966452 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.016480 4766 kubelet.go:2542] "SyncLoop 
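[Editor's note] The "Probe failed" lines above are the kubelet's readiness prober hitting the download-server container before it is listening, getting "connect: connection refused", and recording the failure; once the container binds its port the same loop flips the pod to ready, as the "SyncLoop (probe)" entry for controller-manager shows. A minimal sketch of that kind of HTTP readiness check, assuming the pod IP and port from the log (this is a standalone illustration, not kubelet's prober):

    // probecheck.go - poll an HTTP endpoint the way a readiness probe
    // would: a refused connection is a failure, any response is a result.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get("http://10.217.0.12:8080/")
            if err != nil {
                // Matches the log: "dial tcp 10.217.0.12:8080: connect: connection refused"
                fmt.Printf("attempt %d: probe failed: %v\n", attempt, err)
                time.Sleep(10 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Printf("attempt %d: probe status %d\n", attempt, resp.StatusCode)
            return
        }
    }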
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.036765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.038187 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.53815302 +0000 UTC m=+144.176110366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.134248 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podStartSLOduration=123.134221726 podStartE2EDuration="2m3.134221726s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.128661698 +0000 UTC m=+143.766619044" watchObservedRunningTime="2026-01-30 16:24:49.134221726 +0000 UTC m=+143.772179062" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.137923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.138462 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.638422467 +0000 UTC m=+144.276379813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.150734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.151303 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.65128924 +0000 UTC m=+144.289246586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: W0130 16:24:49.194626 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9669ae_a5fc_4e59_b2b7_3ae1ebf6f3ad.slice/crio-5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756 WatchSource:0}: Error finding container 5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756: Status 404 returned error can't find the container with id 5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756 Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.256501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.257014 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.75696855 +0000 UTC m=+144.394925896 (durationBeforeRetry 500ms). 
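[Editor's note] Each failed operation is parked by nestedpendingoperations.go with a "No retries permitted until ..." deadline; the "durationBeforeRetry 500ms" field is the backoff the volume manager applies before the reconciler may try that operation key again. A minimal sketch of that style of retry loop, assuming an initial 500ms delay that doubles up to a cap (this mimics the pattern, it is not kubelet's actual implementation):

    // backoff.go - exponential backoff around a failing operation.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retryWithBackoff(op func() error, initial, max time.Duration) error {
        delay := initial
        for attempt := 1; ; attempt++ {
            if err := op(); err == nil {
                return nil
            } else {
                fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
            }
            time.Sleep(delay)
            if delay *= 2; delay > max {
                delay = max
            }
        }
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(func() error {
            // Succeed on the fourth try, standing in for the driver
            // finally registering with the kubelet.
            if calls++; calls < 4 {
                return errors.New("driver name kubevirt.io.hostpath-provisioner not found")
            }
            return nil
        }, 500*time.Millisecond, 2*time.Minute)
    }

In this log the delay stays at 500ms across attempts because mount and unmount for the two pods are tracked as separate operation keys, each still early in its backoff.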
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.257284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.257641 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.757626739 +0000 UTC m=+144.395584085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.367700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.369443 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.869407392 +0000 UTC m=+144.507364748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.369647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.375888 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.874478536 +0000 UTC m=+144.512435882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.465995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podStartSLOduration=124.46596399 podStartE2EDuration="2m4.46596399s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.409330924 +0000 UTC m=+144.047288310" watchObservedRunningTime="2026-01-30 16:24:49.46596399 +0000 UTC m=+144.103921336"
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.466952 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.471819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.472403 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.972379291 +0000 UTC m=+144.610336637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.482038 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.495153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.575927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.577498 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.077477407 +0000 UTC m=+144.715434743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.680685 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.681269 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.181243447 +0000 UTC m=+144.819200793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.791767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.792636 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.29261333 +0000 UTC m=+144.930570676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.889429 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" podStartSLOduration=123.889353943 podStartE2EDuration="2m3.889353943s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.820221354 +0000 UTC m=+144.458178710" watchObservedRunningTime="2026-01-30 16:24:49.889353943 +0000 UTC m=+144.527311299"
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.892872 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.911442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"]
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.913222 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.413187587 +0000 UTC m=+145.051144933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
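[Editor's note] The pod_startup_latency_tracker.go:104 entries scattered through this window record, per pod, the SLO-relevant startup duration (podStartSLOduration, in seconds) alongside creation and observed-running timestamps; the ~2m durations here reflect pods created at 16:22:45-46 only being observed running around 16:24:49. These lines are regular enough to extract mechanically. A minimal sketch, assuming the log is piped in on stdin (the program name startup_latency.go is illustrative):

    // startup_latency.go - pull pod name and SLO startup duration out of
    // "Observed pod startup duration" lines like the ones above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strconv"
    )

    func main() {
        re := regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet log lines are long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                secs, _ := strconv.ParseFloat(m[2], 64)
                fmt.Printf("%-75s %8.1fs\n", m[1], secs)
            }
        }
    }

Run as, for example, "go run startup_latency.go < kubelet.log" to get a sorted-input table of startup durations for this boot.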
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.920103 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.933514 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lnxcr"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.956435 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.977113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" event={"ID":"0d8527eb-86cc-45de-8821-7b80f37422d0","Type":"ContainerStarted","Data":"5bf2aaca3ffc9a6f0b3865148f1db3fe9ff5d8edbd775010a4143273d6d7148b"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.977640 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"]
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.978151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" event={"ID":"33323546-6929-4c9c-a0a3-44842b9897b4","Type":"ContainerStarted","Data":"3e3e5f34546852f9f865ce293d49cf74902831ac94e7372ebd8fbf9c35b342d2"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.979058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" event={"ID":"da9e3070-71fe-41f6-8549-90d97f03c16e","Type":"ContainerStarted","Data":"3119f9563fe5793394a0aa2da3e100fe6d9a4bd23dbeebbc51069eec3f569033"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.986467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pr8gz" event={"ID":"af6eef76-87a0-459c-b2eb-61e06ae7386d","Type":"ContainerStarted","Data":"8df16e695c546b49c3fb9e0f2f6b9286cf04964bc89bd976fd8f255d3b0ffb9c"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.991689 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" event={"ID":"7082a4c2-c998-4e1c-8264-2bafcd96d0c1","Type":"ContainerStarted","Data":"90ed4770024b3a935ff695b330dd2070d63f9090327d3c9c82f7ac1923e50390"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.992860 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-92gpq" event={"ID":"ae6eef10-afa3-4bb1-b57a-5a89d305467e","Type":"ContainerStarted","Data":"296b99530e5aec0667f1585adc7769f2e22feb2beeb616aacb60bfdf325d5645"}
Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.995290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.996338 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.496322018 +0000 UTC m=+145.134279364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.013159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.016696 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.031641 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.035837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"770d6e0a58032340a0944dbb22c0ab598c6a53cda36eadedcc32b80f603d6e08"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.054493 4766 generic.go:334] "Generic (PLEG): container finished" podID="0fd41a92-ef77-4a02-bd2b-089d2edb3cf4" containerID="bbdd72910bf69cedc9b201ca08b0d2cf32920301a60836f6830d189b4fae9f6c" exitCode=0
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.056131 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerDied","Data":"bbdd72910bf69cedc9b201ca08b0d2cf32920301a60836f6830d189b4fae9f6c"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.072772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"f1bcfef40c047ee2d486510556be4c02c15197feb65c844e1b250852a3541990"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.085071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" event={"ID":"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a","Type":"ContainerStarted","Data":"599ebdbf5ebee78e6d458684bbc734c349630f6355f0b75ec6a80fa5519e47a0"}
Jan 30 16:24:50 crc kubenswrapper[4766]: W0130 16:24:50.093486 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6289d893_d357_4aab_a2e9_389a422ebaa5.slice/crio-229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff WatchSource:0}: Error finding container 229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff: Status 404 returned error can't find the container with id 229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.096732 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.097473 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.597444078 +0000 UTC m=+145.235401424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.097934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.098706 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.598682141 +0000 UTC m=+145.236639637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.145536 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"314df87171b22bdeda0433f572a5232af51a1d1b4dcf4b0bef93c38a9b32f0b0"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.162078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.179419 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" podStartSLOduration=124.179394789 podStartE2EDuration="2m4.179394789s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.165745975 +0000 UTC m=+144.803703321" watchObservedRunningTime="2026-01-30 16:24:50.179394789 +0000 UTC m=+144.817352135"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.196424 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.199001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"3250aa1b6948fb4c1d00424aab5e2b385f337b6ce92155e423c37dd416a4e57d"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.199398 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.200596 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.700570291 +0000 UTC m=+145.338527637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.203103 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.228288 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.228375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.234321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.234944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"]
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.237585 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" podStartSLOduration=124.237557785 podStartE2EDuration="2m4.237557785s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.222425253 +0000 UTC m=+144.860382609" watchObservedRunningTime="2026-01-30 16:24:50.237557785 +0000 UTC m=+144.875515131"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.258444 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pr8gz" podStartSLOduration=124.258422911 podStartE2EDuration="2m4.258422911s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.256976652 +0000 UTC m=+144.894934008" watchObservedRunningTime="2026-01-30 16:24:50.258422911 +0000 UTC m=+144.896380247"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.281927 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"256047d4257fc7a44d1ece1f87cbb5c8d5501e1d7fbc18af85fcf19357650f6b"}
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.296425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" podStartSLOduration=124.29640038 podStartE2EDuration="2m4.29640038s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.295698782 +0000 UTC m=+144.933656128" watchObservedRunningTime="2026-01-30 16:24:50.29640038 +0000 UTC m=+144.934357726"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.300590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
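[Editor's note] The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-vljjd just above is the significant event in this stretch: that is the plugin pod whose registration will add kubevirt.io.hostpath-provisioner to the kubelet's driver list and let the retries below finally succeed. To gauge how long the gap lasted, the retry errors are easy to count from the log. A minimal sketch, assuming the log arrives on stdin (illustrative tooling):

    // retrycount.go - count the CSI mount/unmount failures caused by the
    // driver not yet being registered.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        mounts, unmounts := 0, 0
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "not found in the list of registered CSI drivers") {
                continue
            }
            if strings.Contains(line, "MountVolume.MountDevice failed") {
                mounts++
            }
            if strings.Contains(line, "UnmountVolume.TearDown failed") {
                unmounts++
            }
        }
        fmt.Printf("failed MountDevice attempts: %d\nfailed TearDown attempts: %d\n", mounts, unmounts)
    }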
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.301563 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.801544627 +0000 UTC m=+145.439501973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.302056 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.302094 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.340610 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" podStartSLOduration=124.340588446 podStartE2EDuration="2m4.340588446s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.335892492 +0000 UTC m=+144.973849848" watchObservedRunningTime="2026-01-30 16:24:50.340588446 +0000 UTC m=+144.978545782"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.402967 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.406116 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.906085179 +0000 UTC m=+145.544042525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.410457 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" podStartSLOduration=125.410429334 podStartE2EDuration="2m5.410429334s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.358280697 +0000 UTC m=+144.996238043" watchObservedRunningTime="2026-01-30 16:24:50.410429334 +0000 UTC m=+145.048386680"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.451771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.452405 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.452494 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.509595 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.511690 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.011672597 +0000 UTC m=+145.649629943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.612360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.612982 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.112961271 +0000 UTC m=+145.750918617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.714885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.715698 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.215675264 +0000 UTC m=+145.853632600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.816418 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.817077 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.31705613 +0000 UTC m=+145.955013476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.921704 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.922247 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.422223668 +0000 UTC m=+146.060181014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.022818 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.023166 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.523133073 +0000 UTC m=+146.161090419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.023383 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.023760 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.523748329 +0000 UTC m=+146.161705675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.133058 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.133325 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.633296363 +0000 UTC m=+146.271253709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.133633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.134087 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.634074614 +0000 UTC m=+146.272031970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.235256 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.235526 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.735485662 +0000 UTC m=+146.373442998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.235672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.236367 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.736357565 +0000 UTC m=+146.374314911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.338992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.339741 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.839718734 +0000 UTC m=+146.477676080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.369999 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"7405ae37c26b0581853db3ccac8ce6dd159a12ff270e5eaa1ff4742c800c28ae"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.379585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" event={"ID":"236f27f9-0389-4143-8014-18eb1f125468","Type":"ContainerStarted","Data":"6363198d2d71917d8b884b64446a2ebb6a1046c1c91849449bbbdea23eee6260"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.389793 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerStarted","Data":"b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.389860 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerStarted","Data":"7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.394034 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" event={"ID":"33323546-6929-4c9c-a0a3-44842b9897b4","Type":"ContainerStarted","Data":"e1c31ad8125853f8ec6630ad7159cee1cf9b16658bbe92eca33d530b84460071"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.414791 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" podStartSLOduration=125.414770331 podStartE2EDuration="2m5.414770331s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.413050815 +0000 UTC m=+146.051008171" watchObservedRunningTime="2026-01-30 16:24:51.414770331 +0000 UTC m=+146.052727677" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.437104 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerStarted","Data":"cdab0ea604c4498b9ce6f2b77f1393d36bdca6102e490574ce37c01f5b6bc92e"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.440696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.441951 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.941931713 +0000 UTC m=+146.579889059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.462600 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:51 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:51 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:51 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.463126 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.472233 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" podStartSLOduration=125.472219109 podStartE2EDuration="2m5.472219109s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.469921137 +0000 UTC m=+146.107878473" watchObservedRunningTime="2026-01-30 16:24:51.472219109 +0000 UTC m=+146.110176455" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 
16:24:51.483125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"268d7193ac5bf2744bf25326aabc4c15019a681734b5d92fd842657e4918c259"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.503402 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" podStartSLOduration=126.503376367 podStartE2EDuration="2m6.503376367s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.502763361 +0000 UTC m=+146.140720707" watchObservedRunningTime="2026-01-30 16:24:51.503376367 +0000 UTC m=+146.141333713" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.525922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.534500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" event={"ID":"9b23bdbc-d2d1-4404-8455-4e877764c72d","Type":"ContainerStarted","Data":"447d9f52c39dc0821d8ea59a6af5c7fcbf332a8d3ca17855028b0af3d2557b54"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.555059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.556419 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.056394148 +0000 UTC m=+146.694351494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.577680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" event={"ID":"6eb3e5af-901e-42db-b01e-895e2d6c8171","Type":"ContainerStarted","Data":"0a0f824d256d03cc1a540cba346b16459a55c3f2556c7bd1cc3b5a8f60e24c23"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.590405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"32f740ab70487b2548cac5ac73175d1e67a39887c63387e05090860fbc3167ea"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.627705 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" event={"ID":"a8f468fe-13d2-4f44-ab3e-fd301aac78ce","Type":"ContainerStarted","Data":"96b1d76ae7d550f294ede95c3059a877b6f0998f8aacd8265f3707197ee543a9"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.659910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hfk7g" event={"ID":"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0","Type":"ContainerStarted","Data":"f4b622016ff6c0c01945c575adc8b50ab5bd534d066466f2f142a45da3704375"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.663201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.665028 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.165015627 +0000 UTC m=+146.802972963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.673390 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"537c55ab56b69ecb980d12d859877fb379228ff1661c3331dce60fb6e6cfdbb7"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.673450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"759b72404c1cef5e7791c2725e441b5d4c1e8d16182caaa05112a21632b675ed"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.714103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.715315 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.717799 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" podStartSLOduration=125.717776651 podStartE2EDuration="2m5.717776651s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.713916448 +0000 UTC m=+146.351873794" watchObservedRunningTime="2026-01-30 16:24:51.717776651 +0000 UTC m=+146.355733997" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.724832 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" podStartSLOduration=125.724807998 podStartE2EDuration="2m5.724807998s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.644492092 +0000 UTC m=+146.282449438" watchObservedRunningTime="2026-01-30 16:24:51.724807998 +0000 UTC m=+146.362765364" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.726682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"2139692494ba33eef2db868c7d67b746eb934636e0e538b12adf597842124180"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.745425 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" 
event={"ID":"7082a4c2-c998-4e1c-8264-2bafcd96d0c1","Type":"ContainerStarted","Data":"6a3a2bf293b49d0429263f964f58090e6b3564f1ffd0c8c8241cc42e8a8bb9c1"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.760254 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wcmvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.760641 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.761674 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-92gpq" event={"ID":"ae6eef10-afa3-4bb1-b57a-5a89d305467e","Type":"ContainerStarted","Data":"e76a8866ea3ade697977a6e4499ca9be59e4b5cd0e3c08aa551cab86750a1d91"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.763690 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.764997 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.264976867 +0000 UTC m=+146.902934213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.779074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" podStartSLOduration=125.779045271 podStartE2EDuration="2m5.779045271s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.777249093 +0000 UTC m=+146.415206439" watchObservedRunningTime="2026-01-30 16:24:51.779045271 +0000 UTC m=+146.417002627" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.780808 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"177cc426e1ddcfe3423fb41da4b3eb7eb60b8c287e3562a5628e8b080ee78199"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.814911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"96a2531ba35b8676aa0de4f1f2099f9a58a9bea620128dc11f663e5b4f181069"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.844553 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" event={"ID":"cb029d61-d79f-45a8-88f1-2c190d9315eb","Type":"ContainerStarted","Data":"d38548fd3dcb73080cdffd7120a608b12cb96d15aecfe9114f6b60664b38a178"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.845806 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.857490 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.857919 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.861071 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.861430 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.863053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" event={"ID":"752a21cf-698e-45b3-91e2-c00b0e82d991","Type":"ContainerStarted","Data":"752368c44f9235ef926b7526e56ccb67ecea79042bf52005a099da0ece3d6549"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.865612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.880570 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.38052259 +0000 UTC m=+147.018479936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.881089 4766 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zps75 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.881643 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" podUID="71148f4c-0b84-45c4-911c-0ec4b06cf710" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.885459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" event={"ID":"e2e4b551-3838-4db9-8ee2-363473a40bc4","Type":"ContainerStarted","Data":"833885bbce589d83ccfd18ff99e12f4e4514dfa88e6fff66c42e00586df2a781"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.892786 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podStartSLOduration=125.892757385 podStartE2EDuration="2m5.892757385s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.836855938 +0000 UTC m=+146.474813284" watchObservedRunningTime="2026-01-30 16:24:51.892757385 +0000 UTC m=+146.530714731" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.892954 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-92gpq" podStartSLOduration=7.89294659 podStartE2EDuration="7.89294659s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.887191677 +0000 UTC m=+146.525149033" watchObservedRunningTime="2026-01-30 16:24:51.89294659 +0000 UTC m=+146.530903936" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.906065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"aa5086e7c2f8951ea0255063dcbd2e4c2bb466af0545c8d1936c4a340c56d773"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.924378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.924734 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943379 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c75qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943463 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podUID="c1191290-07ee-40c4-85e8-59545986d7db" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943895 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"8d5082874d25b8386799f92133190a593df112474c2ba13a6f9daf39110867e5"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.964127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerStarted","Data":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.965443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.966470 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.966909 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.466869337 +0000 UTC m=+147.104826683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.980168 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sbckt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.980275 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.984332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" event={"ID":"17af2b06-620b-4126-ac9e-f0de24c9f6bb","Type":"ContainerStarted","Data":"37cfa1f14a9f6134b6908adadd3a6b6032df5da50b15b1baae295d503e0c6c49"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.984383 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" event={"ID":"17af2b06-620b-4126-ac9e-f0de24c9f6bb","Type":"ContainerStarted","Data":"a44c5d4dbc8a9114803afe2accd4cdb11467fb50aa1b851d129012c5a2fd66dc"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.990382 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" podStartSLOduration=125.990357572 podStartE2EDuration="2m5.990357572s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.989849258 +0000 UTC m=+146.627806604" watchObservedRunningTime="2026-01-30 16:24:51.990357572 +0000 UTC m=+146.628314918" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.000829 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" podStartSLOduration=126.000779719 podStartE2EDuration="2m6.000779719s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.928084825 +0000 UTC m=+146.566042171" watchObservedRunningTime="2026-01-30 16:24:52.000779719 +0000 UTC m=+146.638737065" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.001687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"714bf4a6a15b63c5073fc82efa378c75cb075d7b780c72784409cbfee15e41e6"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.001770 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"c86dc66ce421f80b9b44b1f2caa54a4f0c98553aefc74a62e2b7a17a2b335a61"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.002421 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.012615 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"c2ba8cdc73f709c9b246e9f10819363fed1d633aa5b27834559c875fe325adad"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.019231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"265b6cf499f4675014bad1f21fd5af01055766ea421abe868847ffcb21f2197d"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.023631 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerStarted","Data":"49a469bfbf32d87fdc9772eb7cb8b7a2cfda12f2178ff6d5d4530255ca2db5f7"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.029865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" event={"ID":"bb325f25-00bb-4519-99d5-94ea7bbcd9d5","Type":"ContainerStarted","Data":"b1a4490160d7f5a4f6fd598ea933ace3e42e3c496fd21c4ed95898afd6564752"} Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.083948 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.088540 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.588515863 +0000 UTC m=+147.226473399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.107655 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podStartSLOduration=126.107625151 podStartE2EDuration="2m6.107625151s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.057145708 +0000 UTC m=+146.695103054" watchObservedRunningTime="2026-01-30 16:24:52.107625151 +0000 UTC m=+146.745582507" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.108908 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" podStartSLOduration=126.108899675 podStartE2EDuration="2m6.108899675s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.107254142 +0000 UTC m=+146.745211498" watchObservedRunningTime="2026-01-30 16:24:52.108899675 +0000 UTC m=+146.746857021" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.189960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.190421 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.690384803 +0000 UTC m=+147.328342149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.190777 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.191281 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.691272206 +0000 UTC m=+147.329229552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.208057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podStartSLOduration=126.208040493 podStartE2EDuration="2m6.208040493s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.174527511 +0000 UTC m=+146.812484867" watchObservedRunningTime="2026-01-30 16:24:52.208040493 +0000 UTC m=+146.845997839" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.209052 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" podStartSLOduration=127.209047309 podStartE2EDuration="2m7.209047309s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.206860021 +0000 UTC m=+146.844817367" watchObservedRunningTime="2026-01-30 16:24:52.209047309 +0000 UTC m=+146.847004655" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.238249 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" podStartSLOduration=126.238223286 podStartE2EDuration="2m6.238223286s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.23653822 +0000 UTC m=+146.874495566" watchObservedRunningTime="2026-01-30 16:24:52.238223286 +0000 UTC m=+146.876180632" Jan 30 16:24:52 
crc kubenswrapper[4766]: I0130 16:24:52.273344 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" podStartSLOduration=126.273323179 podStartE2EDuration="2m6.273323179s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.271526432 +0000 UTC m=+146.909483788" watchObservedRunningTime="2026-01-30 16:24:52.273323179 +0000 UTC m=+146.911280525" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.294497 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.295501 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.795468278 +0000 UTC m=+147.433425624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.301569 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" podStartSLOduration=126.30155123 podStartE2EDuration="2m6.30155123s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.300507402 +0000 UTC m=+146.938464738" watchObservedRunningTime="2026-01-30 16:24:52.30155123 +0000 UTC m=+146.939508576" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.339564 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podStartSLOduration=126.339544151 podStartE2EDuration="2m6.339544151s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.337439505 +0000 UTC m=+146.975396851" watchObservedRunningTime="2026-01-30 16:24:52.339544151 +0000 UTC m=+146.977501507" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.396598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.397071 4766 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.89704943 +0000 UTC m=+147.535006776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.399000 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" podStartSLOduration=126.398973181 podStartE2EDuration="2m6.398973181s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.3970375 +0000 UTC m=+147.034994856" watchObservedRunningTime="2026-01-30 16:24:52.398973181 +0000 UTC m=+147.036930527" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.441779 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" podStartSLOduration=126.441761549 podStartE2EDuration="2m6.441761549s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.440324572 +0000 UTC m=+147.078281918" watchObservedRunningTime="2026-01-30 16:24:52.441761549 +0000 UTC m=+147.079718895" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.462620 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:52 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:52 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:52 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.463002 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.474487 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8fgxh" podStartSLOduration=127.47446427 podStartE2EDuration="2m7.47446427s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.472561049 +0000 UTC m=+147.110518405" watchObservedRunningTime="2026-01-30 16:24:52.47446427 +0000 UTC m=+147.112421626" Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.498606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.499026 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.998992942 +0000 UTC m=+147.636950288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.600274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.600723 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.100707708 +0000 UTC m=+147.738665054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.701852 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.702075 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.202034403 +0000 UTC m=+147.839991759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.702190 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.702698 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.20268199 +0000 UTC m=+147.840639516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.803826 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.804061 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.304024517 +0000 UTC m=+147.941981863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.804620 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.805049 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.305041373 +0000 UTC m=+147.942998719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.906277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.907392 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.407347625 +0000 UTC m=+148.045304971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.909067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.909964 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.409738328 +0000 UTC m=+148.047695674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.965230 4766 csr.go:261] certificate signing request csr-nw8v6 is approved, waiting to be issued Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.976385 4766 csr.go:257] certificate signing request csr-nw8v6 is issued Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.010783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.011551 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.511511666 +0000 UTC m=+148.149469072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.035158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"cfa6e2db336fc6785b9b181a509714caa7a29db322047e22a01d115b81c8c5a7"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.035555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"350d5ffed7059997ad2a9f5fddcc10d2543a396b97f50913871049601f3e9f60"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.038243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" event={"ID":"e2e4b551-3838-4db9-8ee2-363473a40bc4","Type":"ContainerStarted","Data":"88da5e091813b2ae889a5abce8a5c7b378f6d55226ab55628cb5c054037bd528"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.040601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"ea0a8858d87a0af0254064800731e7b85e5dd3f77c82c9e17c1814222ab6f4f3"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.040882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"6dba05c6cf302151559a448b5a7144550979b9dc2b3cfd9f9bcc6c2eddc24f47"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.041537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.042771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" event={"ID":"9b23bdbc-d2d1-4404-8455-4e877764c72d","Type":"ContainerStarted","Data":"d543a873765dd07695f0a5b7704044c70c0fc8424ab6e91c411205852b97f8c7"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.043253 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.044852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" event={"ID":"236f27f9-0389-4143-8014-18eb1f125468","Type":"ContainerStarted","Data":"26780db5818c6efb42b27114dddc4051db1e2aa057ae3cedc31ae8acdedbb769"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046090 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046348 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hjlfz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046539 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podUID="9b23bdbc-d2d1-4404-8455-4e877764c72d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.047001 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hqpk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.047118 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podUID="236f27f9-0389-4143-8014-18eb1f125468" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.048747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"97c6b040fd0559ab1bb40db0ab74cbfba27cdb7e1fb086235d129cea7d0f3c53"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.050972 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"4e5f9778b56d5a6e0d4e84609cfb82ec6d5c1ba07cd9e9a5565f9deae58dae67"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.051098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"e3c10f7bd38cf6c82e0a17a24ecc1f02c302aac2c3f80877ec0440f531e63771"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.054094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerStarted","Data":"5d710bfa89da9c2138c6091b48fa73bf9c82d796128313ddb96a4381746d4576"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.054273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.055624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerStarted","Data":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.057146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" event={"ID":"cb029d61-d79f-45a8-88f1-2c190d9315eb","Type":"ContainerStarted","Data":"ae62df9a2129ddcb0f7307054f73875d70d72f12be1382dfc87b4aa071371d4d"} Jan 30 16:24:53 crc 
kubenswrapper[4766]: I0130 16:24:53.058657 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.058834 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.060742 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"89c6881619dca0f47829132ee99216bb505bf40322cc160d3d5a94cf0714e639"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.062955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hfk7g" event={"ID":"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0","Type":"ContainerStarted","Data":"25cee3581538a46e288fe32e9a96d46c62607cc8ab2c44d06f6049b561af07d8"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.065506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" event={"ID":"bb325f25-00bb-4519-99d5-94ea7bbcd9d5","Type":"ContainerStarted","Data":"8b6ceb44e605e8d65c22ee47f1d8a63f9e04beef4021b510f174928a0704cb71"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.074896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"4191ecbd08a95f39ecad007556146b8b4179e9d2053e3e331735cfb272c9d87a"} Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.078312 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wcmvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.078403 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.086857 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" podStartSLOduration=127.08683558 podStartE2EDuration="2m7.08683558s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.085566265 +0000 UTC m=+147.723523621" watchObservedRunningTime="2026-01-30 16:24:53.08683558 +0000 UTC m=+147.724792926" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.112812 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.117808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.118291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.118756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.119050 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.137677 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.140428 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.640385974 +0000 UTC m=+148.278343340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.144408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.150089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.151557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.158921 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.221923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.222542 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.722511689 +0000 UTC m=+148.360469045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.256753 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.261534 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podStartSLOduration=127.261509866 podStartE2EDuration="2m7.261509866s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.167163326 +0000 UTC m=+147.805120672" watchObservedRunningTime="2026-01-30 16:24:53.261509866 +0000 UTC m=+147.899467212" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.279051 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.324280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.325021 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.825008955 +0000 UTC m=+148.462966301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.355614 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" podStartSLOduration=127.355583339 podStartE2EDuration="2m7.355583339s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.264367922 +0000 UTC m=+147.902325268" watchObservedRunningTime="2026-01-30 16:24:53.355583339 +0000 UTC m=+147.993540685" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.358537 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podStartSLOduration=127.358520237 podStartE2EDuration="2m7.358520237s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.331548579 +0000 UTC m=+147.969505935" watchObservedRunningTime="2026-01-30 16:24:53.358520237 +0000 UTC m=+147.996477583" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.396103 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" podStartSLOduration=128.396081956 podStartE2EDuration="2m8.396081956s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.392949313 +0000 UTC m=+148.030906679" watchObservedRunningTime="2026-01-30 16:24:53.396081956 +0000 UTC m=+148.034039302" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.416743 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" podStartSLOduration=127.416720414 podStartE2EDuration="2m7.416720414s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.415712528 +0000 UTC m=+148.053669884" watchObservedRunningTime="2026-01-30 16:24:53.416720414 +0000 UTC m=+148.054677760" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.442264 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.442718 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.942691016 +0000 UTC m=+148.580648362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.462161 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:53 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:53 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:53 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.467407 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.526083 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hfk7g" podStartSLOduration=9.526055613 podStartE2EDuration="9.526055613s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.485588607 +0000 UTC m=+148.123545953" watchObservedRunningTime="2026-01-30 16:24:53.526055613 +0000 UTC m=+148.164012969" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.526723 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lnxcr" podStartSLOduration=9.526715601 podStartE2EDuration="9.526715601s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.518988265 +0000 UTC m=+148.156945611" watchObservedRunningTime="2026-01-30 16:24:53.526715601 +0000 UTC m=+148.164672957" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.545306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.545806 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.045784458 +0000 UTC m=+148.683741804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.646532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.646953 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.146930369 +0000 UTC m=+148.784887705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.748156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.748939 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.248922322 +0000 UTC m=+148.886879668 (durationBeforeRetry 500ms). 
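
By this point the capture is dominated by the same MountVolume/UnmountVolume pair recurring on every reconciler pass, so when triaging a log like this it helps to collapse the repetition by call site. A small scanner keyed on the klog header (severity+MMDD, time, PID, file:line) that tallies entries; the regexp assumes exactly the header format visible in these lines:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches headers such as: E0130 16:24:53.442718 4766 nestedpendingoperations.go:348]
    var header = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\]`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // lines in this capture run long
        for sc.Scan() {
            // A single line here may hold several entries, so match all of them.
            for _, m := range header.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[1]+" "+m[5]]++ // e.g. "E nestedpendingoperations.go:348"
            }
        }
        for k, n := range counts {
            fmt.Printf("%6d  %s\n", n, k)
        }
    }

Fed with something like `zcat kubelet.log.gz | go run klogcount.go` (the file name is hypothetical), this would surface the nestedpendingoperations.go:348 and reconciler_common.go counters that dominate this window.
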
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.850170 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.850706 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.350644137 +0000 UTC m=+148.988601483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.873865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.963857 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.463834239 +0000 UTC m=+149.101791585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.953252 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.979629 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 16:19:52 +0000 UTC, rotation deadline is 2026-11-08 09:41:44.580408886 +0000 UTC Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.979681 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6761h16m50.600731636s for next certificate rotation Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.071014 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.071454 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.571429361 +0000 UTC m=+149.209386707 (durationBeforeRetry 500ms). 
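
The two certificate_manager.go lines above pair an expiration of 2027-01-30 16:19:52 with a rotation deadline of 2026-11-08 09:41:44, roughly 77% of the way through the certificate's one-year validity: client-go's certificate manager deliberately jitters the deadline inside the validity window so nodes do not all renew at once, then sleeps until it. The logged 6761h16m50.6s wait is plain subtraction from the moment of logging:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        // Timestamps copied from the kubelet-serving entries above; "now" is
        // approximated by the log line's own timestamp.
        now, _ := time.Parse(layout, "2026-01-30 16:24:53.979681 +0000 UTC")
        deadline, _ := time.Parse(layout, "2026-11-08 09:41:44.580408886 +0000 UTC")
        fmt.Println(deadline.Sub(now)) // ~6761h16m50.6s, the logged wait
    }
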
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.099593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"b5ad1c002728a64dacfb9c106729503b83c822bc791e12402de5e6f16e1b6f3b"} Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104534 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hqpk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104579 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podUID="236f27f9-0389-4143-8014-18eb1f125468" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104731 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104853 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104541 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hjlfz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104930 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podUID="9b23bdbc-d2d1-4404-8455-4e877764c72d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.111232 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7j765 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.111260 4766 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" podUID="0fd41a92-ef77-4a02-bd2b-089d2edb3cf4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.178291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.186013 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.685987708 +0000 UTC m=+149.323945044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.290609 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.291158 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.791131504 +0000 UTC m=+149.429088850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.392335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.392774 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:54.892756469 +0000 UTC m=+149.530713815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.454433 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:54 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:54 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:54 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.454524 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.493780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.494078 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.994033212 +0000 UTC m=+149.631990568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.494161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.494532 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.994513855 +0000 UTC m=+149.632471391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.595769 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.596209 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.096164549 +0000 UTC m=+149.734121895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.697873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.698350 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.198330547 +0000 UTC m=+149.836287893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.799107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.799370 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.299338474 +0000 UTC m=+149.937295820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.799433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.799823 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.299810776 +0000 UTC m=+149.937768122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.901425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.901707 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.401670606 +0000 UTC m=+150.039627942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.902037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.902551 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.402531709 +0000 UTC m=+150.040489055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.004072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.004314 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.504272565 +0000 UTC m=+150.142229911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.004393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.004836 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.50482352 +0000 UTC m=+150.142780906 (durationBeforeRetry 500ms). 
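
All of the E-lines here come from the same guard: once an operation against this volume fails, nestedpendingoperations stamps a deadline and rejects new attempts on the same key until it passes, which is exactly the repeating "failed. No retries permitted until <t> (durationBeforeRetry 500ms)" text. A toy version of that gate, illustrating the pattern only; the kubelet's real per-volume backoff logic in nestedpendingoperations.go is more involved, and the names below are invented for the sketch:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate rejects attempts on a key until its recorded deadline passes,
    // echoing the kubelet messages above.
    type retryGate struct {
        notBefore map[string]time.Time
        backoff   time.Duration
    }

    func (g *retryGate) run(key string, op func() error) error {
        if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
            return fmt.Errorf("operation for %q failed: no retries permitted until %s", key, t.Format(time.RFC3339Nano))
        }
        if err := op(); err != nil {
            g.notBefore[key] = time.Now().Add(g.backoff)
            return err
        }
        delete(g.notBefore, key)
        return nil
    }

    func main() {
        g := &retryGate{notBefore: map[string]time.Time{}, backoff: 500 * time.Millisecond}
        mount := func() error { return errors.New("driver kubevirt.io.hostpath-provisioner not registered") }
        for i := 0; i < 4; i++ {
            fmt.Println(g.run("pvc-657094db", mount))
            time.Sleep(200 * time.Millisecond) // the reconciler retries every ~100-200ms above
        }
    }
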
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.105231 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.105581 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.605560869 +0000 UTC m=+150.243518215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.108584 4766 generic.go:334] "Generic (PLEG): container finished" podID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerID="b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe" exitCode=0
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.108681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerDied","Data":"b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.110790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"194256d66db56e5b4d3e2d08ae707c15cdf6e315a894a7ee01f7b04d4521ef91"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.110838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a7ec67b2024a86549ccbc71deac794f16e478880d65c368f45714b607f3b83dc"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.111091 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.113063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"96265d74d9d20fbed617e2ab638aac389508a19e5b03e0571ee0116167a70b6e"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.113138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ce2eaadfdb49ebc39c8112e3f604adfbf265faa0fb830caf045bb1984b8f8d0"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.117218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b9a66a51d927177e552352c451fd6c3e254770cf602e2aa83fee13aebbcb9dde"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.117281 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0f13921c85e76ff9fe555cb96321307ccfa2342722c69c6900286d012f7ef9cf"}
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.207058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.217857 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.717826816 +0000 UTC m=+150.355784162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.320018 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.320277 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.820239501 +0000 UTC m=+150.458196847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.320344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.321023 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.821015781 +0000 UTC m=+150.458973127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.422275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.422533 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.92249538 +0000 UTC m=+150.560452726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.422833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.423240 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.92322356 +0000 UTC m=+150.561180906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.456342 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:55 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:55 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:55 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.456432 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.523998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.524214 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.024157505 +0000 UTC m=+150.662114861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.524390 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.524734 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.02472103 +0000 UTC m=+150.662678376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.547263 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gtfgx"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.625379 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.625528 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.125506171 +0000 UTC m=+150.763463517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.625796 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.626118 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.126109677 +0000 UTC m=+150.764067023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.727327 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.727642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.227601326 +0000 UTC m=+150.865558672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.728530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.729017 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.229003044 +0000 UTC m=+150.866960390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.747777 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.748623 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.753053 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.753246 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.766555 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.795076 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.829785 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.329768405 +0000 UTC m=+150.967725751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.858273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.931693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.936095 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.436075443 +0000 UTC m=+151.074032789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.975135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.000607 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.037605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.037827 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.537790258 +0000 UTC m=+151.175747604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.038126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.038618 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.538598709 +0000 UTC m=+151.176556055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.066837 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.136301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"fe5386b041a5b01f5524eff7402381a52dad56a0345e06ea8e1f78b5d2454107"}
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.138906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.139463 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.639425741 +0000 UTC m=+151.277383077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.240957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.242693 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.742671658 +0000 UTC m=+151.380629004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.343711 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.345228 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.845128044 +0000 UTC m=+151.483085390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.379687 4766 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.394511 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.402981 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.404049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.406597 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.447901 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.448366 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.94835073 +0000 UTC m=+151.586308076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.458535 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:56 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:56 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.458713 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.539821 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549793 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.550107 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.050075756 +0000 UTC m=+151.688033102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.589470 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrcth"]
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.591047 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.591112 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.593414 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.600468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcth"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.600616 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.607408 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651142 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651482 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651524 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651901 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.652748 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.152727326 +0000 UTC m=+151.790684672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.652851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.652905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.654493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume" (OuterVolumeSpecName: "config-volume") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.668454 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k" (OuterVolumeSpecName: "kube-api-access-2cv2k") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "kube-api-access-2cv2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.668737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.682423 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.738634 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754157 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754794 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754855 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754964 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754977 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754988 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") on node \"crc\" DevicePath \"\""
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.755094 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.255074439 +0000 UTC m=+151.893031785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.768400 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.782370 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46g6x"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.784368 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.797708 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856516 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856768 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.857167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.857676 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.357647387 +0000 UTC m=+151.995605154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.858358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.870473 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.884468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.885623 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.937033 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.940136 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c75qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]log ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]etcd ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/max-in-flight-filter ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 16:24:56 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 16:24:56 crc kubenswrapper[4766]: livez check failed
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.940232 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podUID="c1191290-07ee-40c4-85e8-59545986d7db" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.957538 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.959390 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.459362063 +0000 UTC m=+152.097319409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959425 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959464 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.960984 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.460976466 +0000 UTC m=+152.098933812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.979705 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn45b"]
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.981031 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.990430 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn45b"]
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030321 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030410 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030321 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030717 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061206 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: E0130 16:24:57.062242 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.562218209 +0000 UTC m=+152.200175575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.063277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.063748 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.086327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.117006 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166657 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166728 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166759 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:24:57 crc kubenswrapper[4766]: E0130 16:24:57.168000 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.667982892 +0000 UTC m=+152.305940238 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.193910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"e1c78ffb61691bebbf207d4c4d8b6641b5fbb8cea89e3c384d3e24825b02def1"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.194020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"5b6c2b4b02cb2f6a1c92bba7a6f40a0a08a9ff5822b2f8d45b5c528ab23e4fa4"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.202563 4766 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T16:24:56.379775515Z","Handler":null,"Name":""} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.203558 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerStarted","Data":"c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.208124 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209508 4766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209544 4766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerDied","Data":"7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209627 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.223703 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" podStartSLOduration=13.223679934 podStartE2EDuration="13.223679934s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:57.217955221 +0000 UTC m=+151.855912567" watchObservedRunningTime="2026-01-30 16:24:57.223679934 +0000 UTC m=+151.861637280" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.267773 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268852 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.270026 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.270139 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.284701 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.287376 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.297685 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.331510 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.357443 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-969pn"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.372303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.375901 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.375947 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.433148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.449225 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.455099 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:57 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:57 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:57 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.455186 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.505907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.535483 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.535872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.551141 4766 patch_prober.go:28] interesting pod/console-f9d7485db-8fgxh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.551229 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" probeResult="failure" output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.587621 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.593905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.620773 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.888089 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.916222 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.062985 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223046 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223601 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d" exitCode=0 Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223777 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerStarted","Data":"5097ba380ecfee61c19e8e36f0d186a1b5b9774436685bd5dece65fcdce6e72b"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.228890 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c" exitCode=0 Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.229246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.229317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerStarted","Data":"d647130a49f304f91277aec2b42b5513df4dbdb8a8c2d7524ca93ac92c844730"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231718 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970" exitCode=0 Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231781 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" 
event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231809 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerStarted","Data":"89ef9d87bc4ca6e14617c5d57a66c8f3479be224d2f0014eefd70f2deeb130e1"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.232915 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.236342 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.236388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"7171a7bd52b6d6953a2848237464b826e5b11b09254d5ec8e3dc69a35f3813bf"} Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.239130 4766 generic.go:334] "Generic (PLEG): container finished" podID="ecabcf90-8bec-4268-91ea-79d333295003" containerID="c3f50aa5932a546d3c2a9d802e8a53b757d37a7fd3a543f0c4f1e28dac970b7d" exitCode=0 Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.239542 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerDied","Data":"c3f50aa5932a546d3c2a9d802e8a53b757d37a7fd3a543f0c4f1e28dac970b7d"} Jan 30 16:24:58 crc kubenswrapper[4766]: W0130 16:24:58.271540 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97631abe_0d99_4f69_b208_4da9d19a8400.slice/crio-8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421 WatchSource:0}: Error finding container 8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421: Status 404 returned error can't find the container with id 8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421 Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.367371 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"] Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.368588 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.371243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.380596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"] Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.453986 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:58 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:58 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:58 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.454095 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495067 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495415 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: 
\"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.598275 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.620207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.716590 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.760866 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.762541 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.778774 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.963197 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"] Jan 30 16:24:58 crc kubenswrapper[4766]: W0130 16:24:58.978254 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f598bfe_913e_4236_b3c5_78268f38396c.slice/crio-e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef WatchSource:0}: Error finding container e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef: Status 404 returned error can't find the container with id e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.005885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.005961 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006548 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.026710 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.097346 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerStarted","Data":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerStarted","Data":"8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247893 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.249207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerStarted","Data":"e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.251327 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3" exitCode=0 Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.251407 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.268708 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podStartSLOduration=133.268683323 podStartE2EDuration="2m13.268683323s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:59.268529719 +0000 UTC m=+153.906487075" watchObservedRunningTime="2026-01-30 16:24:59.268683323 +0000 UTC m=+153.906640669" Jan 30 
16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.455410 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:59 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:59 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:59 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.455498 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.531854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.605556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618270 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"ecabcf90-8bec-4268-91ea-79d333295003\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618418 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"ecabcf90-8bec-4268-91ea-79d333295003\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618612 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ecabcf90-8bec-4268-91ea-79d333295003" (UID: "ecabcf90-8bec-4268-91ea-79d333295003"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.625400 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ecabcf90-8bec-4268-91ea-79d333295003" (UID: "ecabcf90-8bec-4268-91ea-79d333295003"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.720349 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.720386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766116 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:24:59 crc kubenswrapper[4766]: E0130 16:24:59.766476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766500 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766660 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.767781 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.772011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.777460 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.027456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 
16:25:00.027672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.027785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.028533 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.028546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.052213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.085473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.159552 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.163091 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.180865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259121 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259717 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerDied","Data":"c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c"} Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259748 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.262256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"59a8b17052ea74cbace15a032912d54f5115659fdf57ccdbf95c02e5fb2078ae"} Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.299255 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.332999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.333138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.333196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434864 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc 
kubenswrapper[4766]: I0130 16:25:00.436069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.436389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.457738 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.458066 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:00 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:00 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:00 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.458202 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.479731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.480642 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.483463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.485882 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.486704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.488958 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.639957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.640043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750684 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750811 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.753777 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.775963 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.914325 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.165330 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:01 crc kubenswrapper[4766]: W0130 16:25:01.200478 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode8cf9d72_ab44_4f32_a5a5_1b1542f4aa2e.slice/crio-4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a WatchSource:0}: Error finding container 4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a: Status 404 returned error can't find the container with id 4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.291716 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20" exitCode=0 Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.291833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.297556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"56a4698fa29d8b3f31ac2d170f28bf29651c60264c984a5bcb461ab8477202c2"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.299694 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerID="bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc" exitCode=0 Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.299774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.300923 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerStarted","Data":"4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.304841 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"43c7dbc686fbe2f3266fcd7cd477508571fef6f7a4153b299b92d554a111a343"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.454709 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:01 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:01 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:01 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.455106 4766 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.926961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.932388 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.331934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerStarted","Data":"900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.338920 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" exitCode=0 Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.339577 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.348561 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331" exitCode=0 Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.353136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.374401 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.374366198 podStartE2EDuration="2.374366198s" podCreationTimestamp="2026-01-30 16:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:02.352999379 +0000 UTC m=+156.990956735" watchObservedRunningTime="2026-01-30 16:25:02.374366198 +0000 UTC m=+157.012323544" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.454596 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:02 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:02 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:02 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.454674 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.650842 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.396622 4766 generic.go:334] "Generic (PLEG): container finished" podID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerID="900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6" exitCode=0 Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.396682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerDied","Data":"900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6"} Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.453054 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:03 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:03 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:03 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.453130 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.455908 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:04 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:04 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:04 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.456596 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.846495 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.939876 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.939955 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.940341 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" (UID: "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.947266 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" (UID: "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.041738 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.041778 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.453875 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:05 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:05 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:05 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.453949 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerDied","Data":"4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a"} Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473103 4766 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473169 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:06 crc kubenswrapper[4766]: I0130 16:25:06.452779 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:06 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:06 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:06 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:06 crc kubenswrapper[4766]: I0130 16:25:06.452877 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.016387 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.016799 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.017305 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.017330 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.458089 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:07 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:07 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:07 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.458202 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.499449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.526360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.535060 4766 patch_prober.go:28] interesting pod/console-f9d7485db-8fgxh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.535130 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" probeResult="failure" output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.663685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:08 crc kubenswrapper[4766]: I0130 16:25:08.453514 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:08 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:08 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:08 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:08 crc kubenswrapper[4766]: I0130 16:25:08.454028 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.045170 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.045258 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.452694 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:09 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:09 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:09 crc kubenswrapper[4766]: 
healthz check failed Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.452793 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:10 crc kubenswrapper[4766]: I0130 16:25:10.452684 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:10 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:10 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:10 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:10 crc kubenswrapper[4766]: I0130 16:25:10.452769 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:11 crc kubenswrapper[4766]: I0130 16:25:11.454066 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:25:11 crc kubenswrapper[4766]: I0130 16:25:11.458905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.023678 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.277845 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"] Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.541020 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.546739 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.596330 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.632843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"4c889cfcd8437b41d24cb60bd025045f8f105ce944bfb76b9ecf3006c68a4eb0"} Jan 30 16:25:18 crc kubenswrapper[4766]: I0130 16:25:18.642152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"02cbc3afc54a125a2b594972c317d65c837dc0bd2f808eabc243042f6575b9a7"} Jan 30 16:25:27 crc kubenswrapper[4766]: I0130 16:25:27.560023 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:25:33 crc kubenswrapper[4766]: I0130 16:25:33.957662 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 
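At 16:25:11 the router's startup probe finally reports "started", and readiness flips to "ready" a few milliseconds later. That ordering is the documented Kubernetes behavior: while a startup probe is configured and has not yet succeeded, liveness and readiness probing are held off. A toy gating sketch (not the kubelet's actual prober code) of that state machine:

```go
package main

import "fmt"

// containerProbes models how a startup probe gates the others: readiness and
// liveness results are ignored until startup has succeeded once, which is why
// the router above logs only Startup failures until 16:25:11 and then flips
// straight to "readiness ... ready".
type containerProbes struct{ started bool }

func (c *containerProbes) observe(probe string, success bool) string {
	if !c.started {
		if probe != "startup" {
			return "skipped (startup probe has not succeeded yet)"
		}
		if success {
			c.started = true
			return "status=started"
		}
		return "status=failure"
	}
	if success {
		return "status=ready"
	}
	return "status=failure"
}

func main() {
	c := &containerProbes{}
	fmt.Println(c.observe("startup", false))  // the HTTP 500s above
	fmt.Println(c.observe("readiness", true)) // skipped while gated
	fmt.Println(c.observe("startup", true))   // 16:25:11.454066
	fmt.Println(c.observe("readiness", true)) // 16:25:11.458905
}
```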
Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.570256 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.570763 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4xn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hfpqw_openshift-marketplace(50a11a60-476d-48af-9ff9-b3d9841e6260): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.571932 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.073344 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 16:25:36 crc kubenswrapper[4766]: E0130 16:25:36.074719 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.074743 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.074861 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.075297 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.083356 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.083645 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.087768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.133505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.133577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.258329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.417649 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:39 crc kubenswrapper[4766]: I0130 16:25:39.045714 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:25:39 crc kubenswrapper[4766]: I0130 16:25:39.045805 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.156530 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260"
Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.256162 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.256392 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhvw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qrcth_openshift-marketplace(ac4a36f6-21fe-4374-adaf-4505d59ce4c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.257843 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5"
Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.217743 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5"
Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.560351 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.560537 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqrl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2gzn6_openshift-marketplace(8765357c-9e53-47c7-a913-1dc72a693ef2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.561731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.073090 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.077497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.082684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266173 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367146 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367261 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.389203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.401536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.431491 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.534138 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.534368 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qct46_openshift-marketplace(9f598bfe-913e-4236-b3c5-78268f38396c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535392 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535532 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp6ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cn45b_openshift-marketplace(410ce027-e739-4759-a4ca-96994b5e37e4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535611 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.536666 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.546048 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.546220 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhlt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mvnxb_openshift-marketplace(bbcf0ab9-04e7-47e0-b375-c09a93463cc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.547453 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801585 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4"
Jan 30 16:25:42 crc kubenswrapper[4766]: I0130 16:25:42.915232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 16:25:42 crc kubenswrapper[4766]: W0130 16:25:42.929038 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4b30b717_ab4b_428d_8d98_f035422849b5.slice/crio-8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0 WatchSource:0}: Error finding container 8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0: Status 404 returned error can't find the container with id 8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0
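The "Unhandled Error" entries above embed the full Go struct dump of the failing init container spec. For readability, the same container expressed with the public k8s.io/api/core/v1 types looks roughly like this; it is a sketch that sets only a few of the fields visible in the dump and assumes the k8s.io/api module is available:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The extract-content init container whose &Container{...} dump appears
	// in the errors above, rebuilt from the fields shown in the log.
	c := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/redhat-marketplace-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		ImagePullPolicy:          corev1.PullAlways, // why every sync re-pulls and re-fails
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("%+v\n", c)
}
```

ImagePullPolicy:Always explains why each pod sync attempts a fresh pull rather than using a cached image.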
Jan 30 16:25:42 crc kubenswrapper[4766]: I0130 16:25:42.996432 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 16:25:43 crc kubenswrapper[4766]: W0130 16:25:43.006743 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2e9f6906_38fe_44c5_9bfa_91a159d0bbb0.slice/crio-3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e WatchSource:0}: Error finding container 3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e: Status 404 returned error can't find the container with id 3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.808096 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1" exitCode=0
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.808202 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.810064 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2" exitCode=0
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.810124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.812369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerStarted","Data":"a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.812412 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerStarted","Data":"3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.815037 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"933df1289a819ed8ed49055ce89187d3fa29bd9c5f85fa171641c96f6ce1f3db"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.817959 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerStarted","Data":"0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.818015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerStarted","Data":"8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.855938 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.855910317 podStartE2EDuration="2.855910317s" podCreationTimestamp="2026-01-30 16:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.850908724 +0000 UTC m=+198.488866100" watchObservedRunningTime="2026-01-30 16:25:43.855910317 +0000 UTC m=+198.493867663"
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.867787 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.867762483 podStartE2EDuration="7.867762483s" podCreationTimestamp="2026-01-30 16:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.865612995 +0000 UTC m=+198.503570351" watchObservedRunningTime="2026-01-30 16:25:43.867762483 +0000 UTC m=+198.505719829"
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.904160 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xrldv" podStartSLOduration=178.90413845 podStartE2EDuration="2m58.90413845s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.89887706 +0000 UTC m=+198.536834406" watchObservedRunningTime="2026-01-30 16:25:43.90413845 +0000 UTC m=+198.542095796"
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.830081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerStarted","Data":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.834090 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerStarted","Data":"3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.836821 4766 generic.go:334] "Generic (PLEG): container finished" podID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerID="a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb" exitCode=0
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.837470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerDied","Data":"a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.857143 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46g6x" podStartSLOduration=2.786555888 podStartE2EDuration="48.85710492s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.232659073 +0000 UTC m=+152.870616419" lastFinishedPulling="2026-01-30 16:25:44.303208105 +0000 UTC m=+198.941165451" observedRunningTime="2026-01-30 16:25:44.850455652 +0000 UTC m=+199.488412998" watchObservedRunningTime="2026-01-30 16:25:44.85710492 +0000 UTC m=+199.495062266"
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.877689 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-969pn" podStartSLOduration=2.89637484 podStartE2EDuration="48.877663237s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.234162594 +0000 UTC m=+152.872119940" lastFinishedPulling="2026-01-30 16:25:44.215450991 +0000 UTC m=+198.853408337" observedRunningTime="2026-01-30 16:25:44.872723725 +0000 UTC m=+199.510681071" watchObservedRunningTime="2026-01-30 16:25:44.877663237 +0000 UTC m=+199.515620583"
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.136722 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") "
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") "
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" (UID: "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.242230 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.253492 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" (UID: "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
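The two tracker lines above also show how podStartSLOduration relates to podStartE2EDuration: the SLO figure excludes the image pull window, so slow registry pulls do not count against the startup SLO, while pods that never pulled (zero-value pull timestamps, like installer-9-crc) report identical values. Checking the certified-operators-46g6x numbers directly:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the pod_startup_latency_tracker arithmetic for
// certified-operators-46g6x using the timestamps from the log line above:
// SLO duration = E2E duration - (lastFinishedPulling - firstStartedPulling).
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 16:24:56 +0000 UTC")
	firstPull := parse("2026-01-30 16:24:58.232659073 +0000 UTC")
	lastPull := parse("2026-01-30 16:25:44.303208105 +0000 UTC")
	running := parse("2026-01-30 16:25:44.85710492 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 48.85710492s 2.786555888s, matching the log
}
```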
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.343891 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.739519 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.740534 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852108 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerDied","Data":"3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e"} Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852824 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.878160 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.117578 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.117665 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.163221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.906779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.909335 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a" exitCode=0 Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.909399 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"} Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.912226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.794679 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.921024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerStarted","Data":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.923134 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6" exitCode=0 Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.923210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.925475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.928063 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" exitCode=0 Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.928102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.938083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.938125 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.979791 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrcth" podStartSLOduration=2.699943034 podStartE2EDuration="1m0.979770737s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.232720615 +0000 UTC m=+152.870677961" lastFinishedPulling="2026-01-30 16:25:56.512548318 +0000 UTC m=+211.150505664" observedRunningTime="2026-01-30 16:25:56.951938816 +0000 UTC m=+211.589896162" watchObservedRunningTime="2026-01-30 16:25:56.979770737 +0000 UTC m=+211.617728083" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.156613 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.937352 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.939234 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" 
containerID="beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab" exitCode=0 Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.939292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.941400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.944129 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f" exitCode=0 Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.944225 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.959995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hfpqw" podStartSLOduration=3.978307352 podStartE2EDuration="58.959974311s" podCreationTimestamp="2026-01-30 16:24:59 +0000 UTC" firstStartedPulling="2026-01-30 16:25:02.360618222 +0000 UTC m=+156.998575558" lastFinishedPulling="2026-01-30 16:25:57.342285171 +0000 UTC m=+211.980242517" observedRunningTime="2026-01-30 16:25:57.959685474 +0000 UTC m=+212.597642850" watchObservedRunningTime="2026-01-30 16:25:57.959974311 +0000 UTC m=+212.597931657" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.987755 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" probeResult="failure" output=< Jan 30 16:25:57 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:25:57 crc kubenswrapper[4766]: > Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.046163 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2gzn6" podStartSLOduration=3.009174795 podStartE2EDuration="58.046146094s" podCreationTimestamp="2026-01-30 16:25:00 +0000 UTC" firstStartedPulling="2026-01-30 16:25:02.341493994 +0000 UTC m=+156.979451340" lastFinishedPulling="2026-01-30 16:25:57.378465283 +0000 UTC m=+212.016422639" observedRunningTime="2026-01-30 16:25:58.043143514 +0000 UTC m=+212.681100850" watchObservedRunningTime="2026-01-30 16:25:58.046146094 +0000 UTC m=+212.684103440" Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.951841 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerStarted","Data":"4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.955263 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" 
event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.960124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.978425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qct46" podStartSLOduration=3.9564252399999997 podStartE2EDuration="1m0.978395153s" podCreationTimestamp="2026-01-30 16:24:58 +0000 UTC" firstStartedPulling="2026-01-30 16:25:01.296496085 +0000 UTC m=+155.934453431" lastFinishedPulling="2026-01-30 16:25:58.318465998 +0000 UTC m=+212.956423344" observedRunningTime="2026-01-30 16:25:58.973901113 +0000 UTC m=+213.611858469" watchObservedRunningTime="2026-01-30 16:25:58.978395153 +0000 UTC m=+213.616352499" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.020480 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvnxb" podStartSLOduration=3.957040588 podStartE2EDuration="1m1.020464862s" podCreationTimestamp="2026-01-30 16:24:58 +0000 UTC" firstStartedPulling="2026-01-30 16:25:01.303483222 +0000 UTC m=+155.941440568" lastFinishedPulling="2026-01-30 16:25:58.366907496 +0000 UTC m=+213.004864842" observedRunningTime="2026-01-30 16:25:59.02039133 +0000 UTC m=+213.658348686" watchObservedRunningTime="2026-01-30 16:25:59.020464862 +0000 UTC m=+213.658422208" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.098496 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.098583 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.967123 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd" exitCode=0 Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.967209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd"} Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.085677 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.085770 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.156721 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" probeResult="failure" output=< Jan 30 16:26:00 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:26:00 crc kubenswrapper[4766]: > Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.477988 
4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.478272 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-46g6x" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" containerID="cri-o://4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" gracePeriod=2 Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.486835 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.486884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.853793 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975688 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" exitCode=0 Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"} Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975794 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"d647130a49f304f91277aec2b42b5513df4dbdb8a8c2d7524ca93ac92c844730"} Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975841 4766 scope.go:117] "RemoveContainer" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.982802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058"} Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.997753 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.997861 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.998069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.998737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities" (OuterVolumeSpecName: "utilities") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.005025 4766 scope.go:117] "RemoveContainer" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.009304 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn45b" podStartSLOduration=2.877244982 podStartE2EDuration="1m5.009276937s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.238533651 +0000 UTC m=+152.876490997" lastFinishedPulling="2026-01-30 16:26:00.370565606 +0000 UTC m=+215.008522952" observedRunningTime="2026-01-30 16:26:01.001703155 +0000 UTC m=+215.639660521" watchObservedRunningTime="2026-01-30 16:26:01.009276937 +0000 UTC m=+215.647234283" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.036903 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws" (OuterVolumeSpecName: "kube-api-access-wphws") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "kube-api-access-wphws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.049844 4766 scope.go:117] "RemoveContainer" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.065670 4766 scope.go:117] "RemoveContainer" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.067268 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": container with ID starting with 4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393 not found: ID does not exist" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.067323 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"} err="failed to get container status \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": rpc error: code = NotFound desc = could not find container \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": container with ID starting with 4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393 not found: ID does not exist" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.067375 4766 scope.go:117] "RemoveContainer" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1" Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.069373 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": container with ID starting with 97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1 not found: ID does not exist" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.069411 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"} err="failed to get container status \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": rpc error: code = NotFound desc = could not find container \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": container with ID starting with 97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1 not found: ID does not exist" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.069438 4766 scope.go:117] "RemoveContainer" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c" Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.071319 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": container with ID starting with af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c not found: ID does not exist" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.071363 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"} err="failed to get container status \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": rpc error: code = NotFound desc = could not find container \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": container with ID starting with af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c not found: ID does not exist" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.081142 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100157 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100257 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100271 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.146350 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" probeResult="failure" output=< Jan 30 16:26:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:26:01 crc kubenswrapper[4766]: > Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.302568 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.306598 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.538804 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" probeResult="failure" output=< Jan 30 16:26:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:26:01 crc kubenswrapper[4766]: > Jan 30 16:26:02 crc kubenswrapper[4766]: I0130 16:26:02.047315 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" path="/var/lib/kubelet/pods/7c0324d7-1f61-4e1a-9ce7-fd960abfe244/volumes" Jan 30 16:26:06 crc kubenswrapper[4766]: I0130 16:26:06.271939 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:26:06 crc kubenswrapper[4766]: I0130 16:26:06.982356 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:26:07 crc 
kubenswrapper[4766]: I0130 16:26:07.029765 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.332608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.332938 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.372699 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.272228 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.717476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.718798 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.760302 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047079 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047169 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047229 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047801 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047876 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823" gracePeriod=600 Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.075490 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.140318 4766 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.182913 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.083562 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.142210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.187695 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.539974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.593998 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.040398 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823" exitCode=0 Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.040434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"} Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.041219 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" containerID="cri-o://52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" gracePeriod=2 Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.081068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.081527 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" containerID="cri-o://815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" gracePeriod=2 Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.049846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058"} Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.049792 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" exitCode=0 Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.052962 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerID="815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" 
exitCode=0 Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.053008 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb"} Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.640152 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.757882 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.757991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.758083 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.759144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities" (OuterVolumeSpecName: "utilities") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.769463 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht" (OuterVolumeSpecName: "kube-api-access-kp6ht") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "kube-api-access-kp6ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.805902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.810507 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860386 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860447 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860466 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961549 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.962618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities" (OuterVolumeSpecName: "utilities") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.968497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7" (OuterVolumeSpecName: "kube-api-access-nhlt7") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "kube-api-access-nhlt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.984557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.062849 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.064649 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.064764 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.063881 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.063890 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"7171a7bd52b6d6953a2848237464b826e5b11b09254d5ec8e3dc69a35f3813bf"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.065041 4766 scope.go:117] "RemoveContainer" containerID="52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.066903 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"59a8b17052ea74cbace15a032912d54f5115659fdf57ccdbf95c02e5fb2078ae"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.066979 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.070958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.090324 4766 scope.go:117] "RemoveContainer" containerID="92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.116365 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.119414 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.119591 4766 scope.go:117] "RemoveContainer" containerID="7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.137627 4766 scope.go:117] "RemoveContainer" containerID="815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.137995 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.143723 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.151245 4766 scope.go:117] "RemoveContainer" containerID="beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.167435 4766 scope.go:117] "RemoveContainer" containerID="bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.480952 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.481295 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" containerID="cri-o://0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" gracePeriod=2 Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.831893 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.988540 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities" (OuterVolumeSpecName: "utilities") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.996756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6" (OuterVolumeSpecName: "kube-api-access-sqrl6") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "kube-api-access-sqrl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.049152 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" path="/var/lib/kubelet/pods/410ce027-e739-4759-a4ca-96994b5e37e4/volumes" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.050317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" path="/var/lib/kubelet/pods/bbcf0ab9-04e7-47e0-b375-c09a93463cc9/volumes" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078604 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" exitCode=0 Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078712 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"43c7dbc686fbe2f3266fcd7cd477508571fef6f7a4153b299b92d554a111a343"} Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078735 4766 scope.go:117] "RemoveContainer" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.080167 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.088961 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.089005 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.094059 4766 scope.go:117] "RemoveContainer" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.110115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.112284 4766 scope.go:117] "RemoveContainer" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128278 4766 scope.go:117] "RemoveContainer" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.128803 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": container with ID starting with 0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614 not found: ID does not exist" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128860 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} err="failed to get container status \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": rpc error: code = NotFound desc = could not find container \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": container with ID starting with 0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128884 4766 scope.go:117] "RemoveContainer" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.129259 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": container with ID starting with 07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3 not found: ID does not exist" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.129325 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} err="failed to get container status \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": rpc error: code = NotFound desc = could not find container \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": container with ID starting with 07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.129376 4766 scope.go:117] "RemoveContainer" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.130169 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": container with ID starting with 3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613 not found: ID does not exist" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.130342 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613"} err="failed to get container status \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": rpc error: code = NotFound desc = could not find container \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": container with ID starting with 3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.190210 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.417230 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.420986 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:16 crc kubenswrapper[4766]: I0130 16:26:16.047941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" path="/var/lib/kubelet/pods/8765357c-9e53-47c7-a913-1dc72a693ef2/volumes" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.001812 4766 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.001907 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002712 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002886 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002933 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002963 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.003019 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004316 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004622 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004652 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004661 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004674 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004681 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004690 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004696 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004707 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004712 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004720 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004728 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004737 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004744 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004754 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004761 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004771 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004778 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004788 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004794 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004805 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004823 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004837 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004845 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004857 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004877 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004883 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004894 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004901 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004918 4766 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004929 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004936 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004946 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004953 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005063 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005074 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005084 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005092 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005102 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005114 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005122 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005129 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005139 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005146 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005154 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.005285 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005294 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.006647 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.007230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.013482 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193565 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193842 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193891 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.194002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295921 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296025 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc 
kubenswrapper[4766]: I0130 16:26:21.296093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296165 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296228 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.127995 4766 generic.go:334] "Generic (PLEG): container finished" podID="4b30b717-ab4b-428d-8d98-f035422849b5" containerID="0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.128095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerDied","Data":"0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751"} Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.129101 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.131245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.133317 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134366 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134399 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134413 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134427 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" exitCode=2 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134546 4766 scope.go:117] "RemoveContainer" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.149103 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.453657 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.454765 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.455403 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.455742 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.495547 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.496250 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.496873 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630914 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630944 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630993 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631068 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631165 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631354 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631368 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631379 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631388 4766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631416 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock" (OuterVolumeSpecName: "var-lock") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.636912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.732442 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.732509 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.045733 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.159811 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160445 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" exitCode=0 Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160535 4766 scope.go:117] "RemoveContainer" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160745 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.161574 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.162152 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.163641 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.163914 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerDied","Data":"8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0"} Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165790 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165265 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.168508 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.168709 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.181242 4766 scope.go:117] "RemoveContainer" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.201896 4766 scope.go:117] "RemoveContainer" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.217370 4766 scope.go:117] "RemoveContainer" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.231363 4766 scope.go:117] "RemoveContainer" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.248032 4766 scope.go:117] "RemoveContainer" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.271858 4766 scope.go:117] "RemoveContainer" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.273620 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": container with ID starting with 0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1 not found: ID does not exist" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.273653 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1"} err="failed to get container status \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": rpc error: code = NotFound desc = could not find container \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": container with ID starting with 0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.273696 4766 scope.go:117] "RemoveContainer" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.273982 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": container with ID starting with d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9 not found: ID does not exist" 
containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274031 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9"} err="failed to get container status \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": rpc error: code = NotFound desc = could not find container \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": container with ID starting with d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274051 4766 scope.go:117] "RemoveContainer" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.274420 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": container with ID starting with d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334 not found: ID does not exist" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274466 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334"} err="failed to get container status \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": rpc error: code = NotFound desc = could not find container \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": container with ID starting with d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274495 4766 scope.go:117] "RemoveContainer" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.275252 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": container with ID starting with f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc not found: ID does not exist" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275274 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc"} err="failed to get container status \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": rpc error: code = NotFound desc = could not find container \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": container with ID starting with f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275290 4766 scope.go:117] "RemoveContainer" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.275682 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": container with ID starting with 5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41 not found: ID does not exist" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275719 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41"} err="failed to get container status \"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": rpc error: code = NotFound desc = could not find container \"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": container with ID starting with 5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275736 4766 scope.go:117] "RemoveContainer" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.276125 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": container with ID starting with a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045 not found: ID does not exist" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.276145 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045"} err="failed to get container status \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": rpc error: code = NotFound desc = could not find container \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": container with ID starting with a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045 not found: ID does not exist" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.039603 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.040100 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.042815 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.043234 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.081831 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8efab9447208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,LastTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.127079 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.128217 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.128619 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.129239 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.129976 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 
16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.130002 4766 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.130328 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.183128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"49fecd8be2a6c4bf752c52a3d9142162f9f7dac36faeba708d06ab3a53e06d87"} Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.331370 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.732580 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Jan 30 16:26:27 crc kubenswrapper[4766]: I0130 16:26:27.191035 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2"} Jan 30 16:26:27 crc kubenswrapper[4766]: E0130 16:26:27.192251 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:27 crc kubenswrapper[4766]: I0130 16:26:27.192282 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:27 crc kubenswrapper[4766]: E0130 16:26:27.534051 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Jan 30 16:26:28 crc kubenswrapper[4766]: E0130 16:26:28.196379 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:29 crc kubenswrapper[4766]: E0130 16:26:29.135783 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" 
interval="3.2s" Jan 30 16:26:29 crc kubenswrapper[4766]: E0130 16:26:29.739567 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8efab9447208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,LastTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.302046 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" containerID="cri-o://c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" gracePeriod=15 Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.659901 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.661108 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.661625 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.842907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.842991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843027 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843084 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843138 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843240 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843308 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843400 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843436 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.845317 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.850919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9" (OuterVolumeSpecName: "kube-api-access-mntd9") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "kube-api-access-mntd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851242 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.850965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851483 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851633 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851892 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.852190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.852438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.853231 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945324 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945376 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945395 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945409 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945425 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945440 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945453 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945467 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945479 4766 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945496 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945509 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945523 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945536 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945549 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217235 4766 generic.go:334] "Generic (PLEG): container finished" podID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" exitCode=0 Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerDied","Data":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"} Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217303 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerDied","Data":"a6184cf8b16957ad6df32ef60f66d31e49cd6a8b7088d60d3d7abeb822aa03d8"} Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217343 4766 scope.go:117] "RemoveContainer" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217975 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.218171 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.221196 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.221737 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.234838 4766 scope.go:117] "RemoveContainer" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: E0130 16:26:32.235230 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": container with ID starting with c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1 not found: ID does not exist" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.235266 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"} err="failed to get container status \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": rpc error: code = NotFound desc = could not find container \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": container with ID starting with c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1 not found: ID does not exist" Jan 30 16:26:32 crc kubenswrapper[4766]: E0130 16:26:32.336476 4766 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="6.4s" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.038572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.040332 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.040946 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.055943 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.055985 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:34 crc kubenswrapper[4766]: E0130 16:26:34.056592 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.057244 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.233819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05a9592f0076932bed52a64c5174d7b0290219dfa2f88db228313205be00c92e"} Jan 30 16:26:35 crc kubenswrapper[4766]: E0130 16:26:35.102881 4766 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" volumeName="registry-storage" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.241937 4766 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="eaab36321c3200e0f1d677b1d444f633b389bf5abbfdcff7bbab0ae863bc87a6" exitCode=0 Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"eaab36321c3200e0f1d677b1d444f633b389bf5abbfdcff7bbab0ae863bc87a6"} Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242307 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242335 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242807 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: E0130 16:26:35.242931 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.243041 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246648 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246698 4766 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38" 
exitCode=1 Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246729 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38"} Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247208 4766 scope.go:117] "RemoveContainer" containerID="6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247564 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247906 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.248138 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.905458 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265505 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c8a6228825cdad02fd214b175dcfd4582cc31eb4021a6fa3da99e1e9e20dbb2"} Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265914 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbb84e0a697372a9b6c4917135f4e27f7c946c9427f13b11debe3917ddb7730a"} Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d41387ed76bbba54ab16e4a8774a0fc8ea422811b9fb4e6eb0b367421314405"} Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85f2730b6c81494e30bb54b9d6db46018c3c1b38f70ac1667a882db1b7548b47"} Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.270032 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.270087 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66412e2a523a7faab5a9a322c702486daeda620792156ede9e963c0f09763795"} Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281166 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1252afe6e1cf63b6d3b7b9258560b2ede202b7c18a267d16316c042d9ec9db26"} Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281631 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281797 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281749 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.057895 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.058487 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.063061 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.265087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:26:42 crc kubenswrapper[4766]: I0130 16:26:42.297732 4766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.313981 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.314038 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.319847 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.323464 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaea65f6-cc7c-4398-a46b-87c70da9698e" Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.319455 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.319487 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.324006 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status 
update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaea65f6-cc7c-4398-a46b-87c70da9698e" Jan 30 16:26:45 crc kubenswrapper[4766]: I0130 16:26:45.905881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:26:45 crc kubenswrapper[4766]: I0130 16:26:45.909872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:26:46 crc kubenswrapper[4766]: I0130 16:26:46.336406 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:26:51 crc kubenswrapper[4766]: I0130 16:26:51.994274 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.177314 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.306798 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.898144 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.254703 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.445268 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.483696 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.776707 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.195846 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.406952 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.529693 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.697112 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.894888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.919138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.000819 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.599727 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.627747 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.732786 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.736881 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.961636 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.963570 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.145324 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.224658 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.302678 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.334607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.350458 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.355949 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.419076 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.427481 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.430863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.452462 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.493123 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.522045 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:26:56 crc 
kubenswrapper[4766]: I0130 16:26:56.672258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.697016 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.753786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.883509 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.891062 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.948912 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.966525 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.976930 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.997588 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.065853 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.102283 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.271453 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.301231 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.324766 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.379962 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.456258 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.485956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.495020 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.499972 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:26:57 crc 
kubenswrapper[4766]: I0130 16:26:57.534723 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.548572 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.598999 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.641702 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.681621 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.696208 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.716579 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.737973 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.744157 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.819081 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.861759 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.036007 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.069399 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.136108 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.145764 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.150729 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.209784 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.241712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.251482 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.426425 4766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.441775 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.478959 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.740810 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.826165 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.877345 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.879676 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.885709 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.902407 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.993841 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.023938 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.044138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.074297 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.087128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.201511 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.210375 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.315660 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.329811 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.349337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 
16:26:59.391636 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.399780 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.495701 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.557046 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.581422 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.592492 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.616008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.617478 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.690398 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.698317 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.713600 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.720583 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.802689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.947448 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.979033 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.086246 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.134044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.167410 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.194139 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.194584 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.200809 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.222091 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.273835 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.298127 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.376318 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.392066 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.436919 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.451817 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.458489 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.482928 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.490781 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.501387 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.558163 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.587358 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.693893 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.804357 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.827140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.986221 4766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.003623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.082020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.299350 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.340742 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.452906 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.468006 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.678853 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.756798 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.761014 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.769661 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.993844 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.023202 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.057257 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.177107 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.235316 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.238061 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.289843 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.314128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.365916 4766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.380487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.423380 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.435579 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.470224 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.495060 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.544197 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.880461 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.890388 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.924620 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.979625 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.015883 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.016143 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.019520 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.088632 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.118752 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.134315 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.140515 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.183337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.222909 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.250992 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.274486 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.316739 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.338627 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.348404 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.500960 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.642901 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.673068 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.686150 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.691124 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.696497 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.776846 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.777369 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.851279 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.873338 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.932982 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.981123 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.034084 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.099672 4766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.192542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.306166 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.397958 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.666500 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.761159 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.787695 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.791608 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.802925 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.841352 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.872452 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.894066 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.071396 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.087920 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.135736 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.364258 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.544064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.632936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.695783 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.739264 4766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.748938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.784075 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.988054 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.136496 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.167673 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.286115 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291196 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291273 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6fffd54687-fl5rm"] Jan 30 16:27:06 crc kubenswrapper[4766]: E0130 16:27:06.291500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291520 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer" Jan 30 16:27:06 crc kubenswrapper[4766]: E0130 16:27:06.291532 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291643 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291655 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291785 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291829 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.292150 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295204 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295472 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295563 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295758 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296048 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296073 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296230 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296368 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.297998 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.298065 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.298094 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.308474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.309198 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.319655 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.346154 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.3461338 podStartE2EDuration="24.3461338s" podCreationTimestamp="2026-01-30 16:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 16:27:06.345921194 +0000 UTC m=+280.983878560" watchObservedRunningTime="2026-01-30 16:27:06.3461338 +0000 UTC m=+280.984091146" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.365978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.366053 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408379 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408521 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408825 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409087 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.430486 4766 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.480587 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.498536 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510611 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 
16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510751 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510788 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510852 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.511628 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512137 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.519922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.520051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.520412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.522112 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.528633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.531456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.616969 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.659904 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.865299 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.865364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fffd54687-fl5rm"] Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.926804 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.983143 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.104669 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.381666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.426343 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.447665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" event={"ID":"dfb08685-43c0-4cd6-bb82-51f5df825923","Type":"ContainerStarted","Data":"19faad2de142e6eb25b9f845611d4223a106b12e69c6bf20e7bcff9c8b2fa028"} Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.447728 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" event={"ID":"dfb08685-43c0-4cd6-bb82-51f5df825923","Type":"ContainerStarted","Data":"6cd1b270a44652628af6ab31f77d6e4512e027ce67447faaf88f8341b03fe40b"} Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.448100 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.473058 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" podStartSLOduration=61.473026146 podStartE2EDuration="1m1.473026146s" podCreationTimestamp="2026-01-30 16:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:07.469150563 +0000 UTC m=+282.107107939" watchObservedRunningTime="2026-01-30 16:27:07.473026146 +0000 UTC m=+282.110983492" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.528169 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.594849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.595077 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:27:07 crc 
kubenswrapper[4766]: I0130 16:27:07.632418 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.722993 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.744399 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.770519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.837643 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.031600 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.047848 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" path="/var/lib/kubelet/pods/21a8aae5-a6f8-43e0-ab59-1e6af94eb133/volumes" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.297810 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.326978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.608621 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.699767 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.757833 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:27:09 crc kubenswrapper[4766]: I0130 16:27:09.476902 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:27:11 crc kubenswrapper[4766]: I0130 16:27:11.672822 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:27:16 crc kubenswrapper[4766]: I0130 16:27:16.200880 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:27:16 crc kubenswrapper[4766]: I0130 16:27:16.201439 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" gracePeriod=5 Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.521904 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.522268 4766 generic.go:334] "Generic 
(PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" exitCode=137 Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.769328 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.769863 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818504 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818570 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818688 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818951 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818968 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818978 4766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818988 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.826760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.919493 4766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.045970 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.530821 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.530901 4766 scope.go:117] "RemoveContainer" containerID="1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.531064 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.416278 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.416629 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" containerID="cri-o://cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" gracePeriod=30 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.513294 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.514051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" containerID="cri-o://4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" gracePeriod=30 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.556659 4766 generic.go:334] "Generic (PLEG): container finished" podID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada" exitCode=0 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.556798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"} Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.557428 4766 scope.go:117] "RemoveContainer" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.564745 4766 generic.go:334] "Generic (PLEG): container finished" podID="807df97f-b371-4d04-81e9-b1a823a8a638" containerID="cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" exitCode=0 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.564793 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerDied","Data":"cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1"} Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.700083 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mfclt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.700197 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.837106 4766 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.857821 4766 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.872031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.872560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca" (OuterVolumeSpecName: "client-ca") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.873120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config" (OuterVolumeSpecName: "config") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.873314 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.881518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.881652 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv" (OuterVolumeSpecName: "kube-api-access-5zmsv") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "kube-api-access-5zmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.907734 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973246 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973433 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973464 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973743 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973774 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973788 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973804 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973817 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.974541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca" (OuterVolumeSpecName: "client-ca") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.974729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config" (OuterVolumeSpecName: "config") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.977872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.977988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j" (OuterVolumeSpecName: "kube-api-access-ks54j") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "kube-api-access-ks54j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074772 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074809 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074820 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074832 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.571912 4766 generic.go:334] "Generic (PLEG): container finished" podID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" exitCode=0 Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.571978 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerDied","Data":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572030 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerDied","Data":"777f165aaa35e8debb71a11164cf2e0013257285fafc5c165738c7722a8711a4"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572049 4766 scope.go:117] "RemoveContainer" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.575381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.575778 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.578154 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.579750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerDied","Data":"442796fe00494142d89b0e1b9d6820cd3ac80019a54bf8a35e0ec68f7d85bbbf"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.579815 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.594846 4766 scope.go:117] "RemoveContainer" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.595979 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:26 crc kubenswrapper[4766]: E0130 16:27:26.596031 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": container with ID starting with 4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0 not found: ID does not exist" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.596106 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"} err="failed to get container status \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": rpc error: code = NotFound desc = could not find container \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": container with ID starting with 4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0 not found: ID does not exist" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.596265 4766 scope.go:117] "RemoveContainer" containerID="cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.605268 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.628900 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.633519 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.369534 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370160 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370263 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370345 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370402 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370466 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370683 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370746 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370812 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.371337 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.374534 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"]
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.375631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.375904 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.377461 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.378001 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.378397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380156 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380435 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380624 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380940 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.381777 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.381931 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.382040 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.382419 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.386653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.386785 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.390661 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"]
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391212 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391253 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391384 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
\"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493780 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: 
\"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493915 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.494005 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495294 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495431 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.496295 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.501348 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.507941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.513390 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.514759 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.689723 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.699844 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.874489 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.906667 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"] Jan 30 16:27:27 crc kubenswrapper[4766]: W0130 16:27:27.917596 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ec52f3_f575_4a70_ad65_a7cce55ba3bd.slice/crio-ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb WatchSource:0}: Error finding container ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb: Status 404 returned error can't find the container with id ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.047105 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" path="/var/lib/kubelet/pods/798137fc-1490-4b1c-ac4d-77b6c9e56d05/volumes" Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.048067 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" path="/var/lib/kubelet/pods/807df97f-b371-4d04-81e9-b1a823a8a638/volumes" Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.597790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerStarted","Data":"1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449"} Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.598106 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.598118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerStarted","Data":"779aa30092a73b8f0ead09d3638ab33c4bdd98e3a50ef1e6f57c47c69049b23a"} Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.600396 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" event={"ID":"65ec52f3-f575-4a70-ad65-a7cce55ba3bd","Type":"ContainerStarted","Data":"65270142116f308163aab3be005a4bf9c3c613fc78e5a00d1ac0575954c96b31"} Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.600445 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" event={"ID":"65ec52f3-f575-4a70-ad65-a7cce55ba3bd","Type":"ContainerStarted","Data":"ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb"} Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.603638 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.618453 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" 
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.618453 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" podStartSLOduration=3.618434208 podStartE2EDuration="3.618434208s" podCreationTimestamp="2026-01-30 16:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:28.613963119 +0000 UTC m=+303.251920465" watchObservedRunningTime="2026-01-30 16:27:28.618434208 +0000 UTC m=+303.256391554"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.605905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.611075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.632942 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" podStartSLOduration=4.6329231140000005 podStartE2EDuration="4.632923114s" podCreationTimestamp="2026-01-30 16:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:28.65761658 +0000 UTC m=+303.295573946" watchObservedRunningTime="2026-01-30 16:27:29.632923114 +0000 UTC m=+304.270880470"
Jan 30 16:27:31 crc kubenswrapper[4766]: I0130 16:27:31.304793 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:31 crc kubenswrapper[4766]: I0130 16:27:31.615215 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager" containerID="cri-o://1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449" gracePeriod=30
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.690769 4766 generic.go:334] "Generic (PLEG): container finished" podID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerID="1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449" exitCode=0
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.691093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerDied","Data":"1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449"}
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.875174 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.903669 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:32 crc kubenswrapper[4766]: E0130 16:27:32.903955 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.903978 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.904121 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.904635 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.924227 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.977616 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978413 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978824 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978975 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca" (OuterVolumeSpecName: "client-ca") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.979219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config" (OuterVolumeSpecName: "config") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.983882 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz" (OuterVolumeSpecName: "kube-api-access-qgvcz") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "kube-api-access-qgvcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.984533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080917 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080928 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080941 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080958 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.082808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.083632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.084200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.085597 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.100294 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.225662 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.450730 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"] Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerDied","Data":"779aa30092a73b8f0ead09d3638ab33c4bdd98e3a50ef1e6f57c47c69049b23a"} Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699525 4766 scope.go:117] "RemoveContainer" containerID="1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699422 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.705397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerStarted","Data":"d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a"} Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.705444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerStarted","Data":"e58fbe7996a8ff003a2b6f7f74a31d396be00251f43d6d9bee24d2bba733d54a"} Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.707847 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.707970 4766 patch_prober.go:28] interesting pod/controller-manager-d55469fcf-485sj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.708019 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.750904 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podStartSLOduration=2.750877336 podStartE2EDuration="2.750877336s" podCreationTimestamp="2026-01-30 16:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:33.733594657 +0000 UTC m=+308.371552023" watchObservedRunningTime="2026-01-30 16:27:33.750877336 +0000 UTC m=+308.388834702" Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.764691 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.776524 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:34 crc kubenswrapper[4766]: I0130 16:27:34.046253 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" path="/var/lib/kubelet/pods/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b/volumes" Jan 30 16:27:34 crc kubenswrapper[4766]: I0130 16:27:34.718232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.005173 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-969pn"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.006131 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-969pn" 
podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" containerID="cri-o://3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d" gracePeriod=30 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.017256 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.017522 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" containerID="cri-o://06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b" gracePeriod=30 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.031292 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.031576 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" containerID="cri-o://b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434" gracePeriod=30 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.041333 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.042031 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" containerID="cri-o://4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a" gracePeriod=30 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.055900 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.056272 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" containerID="cri-o://be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8" gracePeriod=30 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.062508 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.063759 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.066815 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162863 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.266263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.276166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.284653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.537563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.544612 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568437 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.570413 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities" (OuterVolumeSpecName: "utilities") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.584774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8" (OuterVolumeSpecName: "kube-api-access-fhvw8") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "kube-api-access-fhvw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670481 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670497 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670509 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.747106 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8" exitCode=0 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.747381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751736 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b" exitCode=0 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751854 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"5097ba380ecfee61c19e8e36f0d186a1b5b9774436685bd5dece65fcdce6e72b"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751844 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751918 4766 scope.go:117] "RemoveContainer" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.759091 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a" exitCode=0 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.759172 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.761311 4766 generic.go:334] "Generic (PLEG): container finished" podID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerID="b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434" exitCode=0 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.761392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.783747 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d" exitCode=0 Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.783806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"} Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.809612 4766 scope.go:117] "RemoveContainer" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.813933 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.822009 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.839452 4766 scope.go:117] "RemoveContainer" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.855851 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858446 4766 scope.go:117] "RemoveContainer" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.858781 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": container with ID starting with 06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b not found: ID does not exist" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858815 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"} err="failed to get container status \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": rpc error: code = NotFound desc = could not find container \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": container with ID starting with 06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858840 4766 scope.go:117] "RemoveContainer" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.859414 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": container with ID starting with 5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a not found: ID does not exist" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859472 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"} err="failed to get container status \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": rpc error: code = NotFound desc = could not find container \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": container with ID starting with 5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859488 4766 scope.go:117] "RemoveContainer" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.859881 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": container with ID starting with 9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d not found: ID does not exist" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859903 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"} err="failed to get container status \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": rpc error: code = NotFound desc = could not find container \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": container with ID starting with 9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859916 4766 scope.go:117] "RemoveContainer" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873862 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873956 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.875814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities" (OuterVolumeSpecName: "utilities") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.884040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r" (OuterVolumeSpecName: "kube-api-access-nc27r") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "kube-api-access-nc27r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.900201 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.902421 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.903623 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.974962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975143 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975232 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975568 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975584 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities" (OuterVolumeSpecName: "utilities") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976879 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.982875 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities" (OuterVolumeSpecName: "utilities") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.985040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.987784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8" (OuterVolumeSpecName: "kube-api-access-h4xn8") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "kube-api-access-h4xn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.988562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4" (OuterVolumeSpecName: "kube-api-access-6gqt4") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "kube-api-access-6gqt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.988680 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t" (OuterVolumeSpecName: "kube-api-access-h4d8t") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "kube-api-access-h4d8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.022214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.046531 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" path="/var/lib/kubelet/pods/ac4a36f6-21fe-4374-adaf-4505d59ce4c5/volumes" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076897 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076930 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076939 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076949 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076958 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076967 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076979 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076988 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076996 4766 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.115776 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.129109 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"] Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.181220 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef"} Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793835 4766 scope.go:117] "RemoveContainer" containerID="4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793723 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.796116 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.796115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"f1bcfef40c047ee2d486510556be4c02c15197feb65c844e1b250852a3541990"} Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.799325 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"89ef9d87bc4ca6e14617c5d57a66c8f3479be224d2f0014eefd70f2deeb130e1"} Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.799393 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.810919 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.810911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"56a4698fa29d8b3f31ac2d170f28bf29651c60264c984a5bcb461ab8477202c2"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" event={"ID":"2b001665-9e64-4f29-b35f-5f702206ae07","Type":"ContainerStarted","Data":"64f0e72481e287d2859faa639293ca26fd2e424e6fafde2e1eff36e2e5d8eae7"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" event={"ID":"2b001665-9e64-4f29-b35f-5f702206ae07","Type":"ContainerStarted","Data":"ad025bb4c60cac767acc5ddcf4b0302bb14775160c22b853d71e08d2f4a26feb"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813938 4766 scope.go:117] "RemoveContainer" containerID="543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813947 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.832941 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.843390 4766 scope.go:117] "RemoveContainer" containerID="ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.853451 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.859510 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.862095 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.865416 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.867008 4766 scope.go:117] "RemoveContainer" containerID="b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.870997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.885167 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.894004 4766 scope.go:117] "RemoveContainer" containerID="3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.903321 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" podStartSLOduration=1.903305022 podStartE2EDuration="1.903305022s" podCreationTimestamp="2026-01-30 16:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:40.902520621 +0000 UTC m=+315.540477977" watchObservedRunningTime="2026-01-30 16:27:40.903305022 +0000 UTC m=+315.541262388"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.912039 4766 scope.go:117] "RemoveContainer" containerID="18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.918296 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.923384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.931547 4766 scope.go:117] "RemoveContainer" containerID="01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.958951 4766 scope.go:117] "RemoveContainer" containerID="be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.972273 4766 scope.go:117] "RemoveContainer" containerID="56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.992236 4766 scope.go:117] "RemoveContainer" containerID="6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.045694 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" path="/var/lib/kubelet/pods/50a11a60-476d-48af-9ff9-b3d9841e6260/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.046941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" path="/var/lib/kubelet/pods/9f598bfe-913e-4236-b3c5-78268f38396c/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.047695 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" path="/var/lib/kubelet/pods/cdbd0f5d-e6fb-4960-a928-7a5dcc399239/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.048719 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" path="/var/lib/kubelet/pods/f55dc373-49c6-4b05-a945-79614dc282d8/volumes"
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.365910 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.366456 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" containerID="cri-o://d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" gracePeriod=30
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.848387 4766 generic.go:334] "Generic (PLEG): container finished" podID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerID="d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" exitCode=0
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.848472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerDied","Data":"d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a"}
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.968142 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057358 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.058797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059002 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config" (OuterVolumeSpecName: "config") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059033 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057522 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059319 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059552 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059569 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059578 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.066282 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd" (OuterVolumeSpecName: "kube-api-access-27tgd") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "kube-api-access-27tgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.070102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.161065 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.161112 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.855635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerDied","Data":"e58fbe7996a8ff003a2b6f7f74a31d396be00251f43d6d9bee24d2bba733d54a"} Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.855710 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.856044 4766 scope.go:117] "RemoveContainer" containerID="d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.886417 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"] Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.893857 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"] Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.382837 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"] Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383082 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383094 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383105 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383110 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383116 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383131 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383137 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383146 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383151 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383159 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383165 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383172 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383190 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383203 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383209 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383220 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383226 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383232 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383238 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383247 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383252 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383266 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383275 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383281 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383289 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383295 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383302 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383308 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383389 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383403 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383412 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383426 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383434 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383825 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.385876 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.386794 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.387714 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.388912 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.389089 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.390320 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.401719 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.406123 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"]
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479990 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.480046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.480247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.582831 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.583516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.583582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.595437 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.602451 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.703753 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.048556 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" path="/var/lib/kubelet/pods/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318/volumes"
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.147367 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"]
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.873392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" event={"ID":"faac4a21-a6d9-49cb-aa50-a78811180a26","Type":"ContainerStarted","Data":"a1385a0ef21788a01b4db812c90b4b2ef2d42befd912556df8c69aa87dcfcd7c"}
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.874069 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.874085 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" event={"ID":"faac4a21-a6d9-49cb-aa50-a78811180a26","Type":"ContainerStarted","Data":"24355eb30bec46eb83e3211c8bae21fc355f9439589efa7eae30cf23a54a185e"}
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.879095 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc"
Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.892489 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" podStartSLOduration=3.892471524 podStartE2EDuration="3.892471524s" podCreationTimestamp="2026-01-30 16:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:48.891895229 +0000 UTC m=+323.529852585" watchObservedRunningTime="2026-01-30 16:27:48.892471524 +0000 UTC m=+323.530428870"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.112086 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"]
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.112987 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.113387 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.135556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"]
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193955 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194015 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194277 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.212142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296429 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296490 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296595 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.297135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.298038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.298064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.305141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.305377 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.313996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.317781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg"
Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.429275 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.861099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"] Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.946004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" event={"ID":"e691f63b-e081-4e1f-9d9e-3af3af8749bc","Type":"ContainerStarted","Data":"c9b68069e30b45190858f72e51693a4243d4226fd4159d3db90ecdd90bd4cb0c"} Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.952557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" event={"ID":"e691f63b-e081-4e1f-9d9e-3af3af8749bc","Type":"ContainerStarted","Data":"cd3b4983f2b0eb75ee718357bedf79fc7950fa3ab7cebc59df1905e5af5cfa67"} Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.952964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.986513 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" podStartSLOduration=1.986491587 podStartE2EDuration="1.986491587s" podCreationTimestamp="2026-01-30 16:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:28:03.986432216 +0000 UTC m=+338.624389582" watchObservedRunningTime="2026-01-30 16:28:03.986491587 +0000 UTC m=+338.624448933" Jan 30 16:28:22 crc kubenswrapper[4766]: I0130 16:28:22.434378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:22 crc kubenswrapper[4766]: I0130 16:28:22.484517 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.279859 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.281799 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.283934 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.289369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.469196 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.470487 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.472510 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475414 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.479385 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.497735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577369 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: 
\"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577430 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.651947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.679065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.701984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.783614 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.062151 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:29 crc kubenswrapper[4766]: W0130 16:28:29.064553 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45931cc3_9fdc_43a0_bc52_7ac389c4f75b.slice/crio-70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e WatchSource:0}: Error finding container 70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e: Status 404 returned error can't find the container with id 70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.078755 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerStarted","Data":"70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e"} Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.194068 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.085699 4766 generic.go:334] "Generic (PLEG): container finished" podID="45931cc3-9fdc-43a0-bc52-7ac389c4f75b" containerID="0b941cc6b7547eb39ab2f29096c216bd65a342eedf24fba721f6d7abced9eeb3" exitCode=0 Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.085784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerDied","Data":"0b941cc6b7547eb39ab2f29096c216bd65a342eedf24fba721f6d7abced9eeb3"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093199 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" exitCode=0 Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerStarted","Data":"4e2e822728d72b043828d2c376fae8de09ee8b30107e67f666204b30101944fd"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.673811 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.674852 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.690938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.691773 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.806848 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807024 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807466 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: 
\"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.827143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.873197 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.874425 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.879405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.886267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.009409 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013348 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.116381 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.116656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.137826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.195501 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.540649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"] Jan 30 16:28:31 crc kubenswrapper[4766]: W0130 16:28:31.548246 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bf71edb_8510_412d_95bd_028b90482ad1.slice/crio-a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e WatchSource:0}: Error finding container a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e: Status 404 returned error can't find the container with id a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.652522 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 16:28:31 crc kubenswrapper[4766]: W0130 16:28:31.716267 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode775d594_6680_4e4a_8b1f_01f3a0738015.slice/crio-f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425 WatchSource:0}: Error finding container f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425: Status 404 returned error can't find the container with id f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425 Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.110340 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" exitCode=0 Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.110415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" 
event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571"} Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119599 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" exitCode=0 Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119670 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54"} Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerStarted","Data":"f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425"} Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.125091 4766 generic.go:334] "Generic (PLEG): container finished" podID="45931cc3-9fdc-43a0-bc52-7ac389c4f75b" containerID="51511cdb8a77cd476c1f4436902e5eace1abf72deb2e557361fd2a2085bea65f" exitCode=0 Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.125582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerDied","Data":"51511cdb8a77cd476c1f4436902e5eace1abf72deb2e557361fd2a2085bea65f"} Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140163 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bf71edb-8510-412d-95bd-028b90482ad1" containerID="ab9f45c4bdf83a02544aa35f32e53d8adf89cd399185ca73d184784e819b21ee" exitCode=0 Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140258 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerDied","Data":"ab9f45c4bdf83a02544aa35f32e53d8adf89cd399185ca73d184784e819b21ee"} Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e"} Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.149843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerStarted","Data":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"} Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.152423 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerStarted","Data":"8e87bb0275b753b25ae6e95f27a6de8c9a8bf65607aa22b6921c55a7c79624c1"} Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.154927 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f"} Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.172433 
4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqx4x" podStartSLOduration=2.428133548 podStartE2EDuration="5.172410416s" podCreationTimestamp="2026-01-30 16:28:28 +0000 UTC" firstStartedPulling="2026-01-30 16:28:30.094704227 +0000 UTC m=+364.732661563" lastFinishedPulling="2026-01-30 16:28:32.838981085 +0000 UTC m=+367.476938431" observedRunningTime="2026-01-30 16:28:33.167239642 +0000 UTC m=+367.805197018" watchObservedRunningTime="2026-01-30 16:28:33.172410416 +0000 UTC m=+367.810367772" Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.217285 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9s94z" podStartSLOduration=2.364731378 podStartE2EDuration="5.217260741s" podCreationTimestamp="2026-01-30 16:28:28 +0000 UTC" firstStartedPulling="2026-01-30 16:28:30.091503539 +0000 UTC m=+364.729460885" lastFinishedPulling="2026-01-30 16:28:32.944032902 +0000 UTC m=+367.581990248" observedRunningTime="2026-01-30 16:28:33.216606593 +0000 UTC m=+367.854563939" watchObservedRunningTime="2026-01-30 16:28:33.217260741 +0000 UTC m=+367.855218087" Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.162531 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bf71edb-8510-412d-95bd-028b90482ad1" containerID="0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f" exitCode=0 Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.162589 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerDied","Data":"0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f"} Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.166842 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" exitCode=0 Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.167700 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd"} Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.176783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerStarted","Data":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"} Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.180562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"d0169081296ca2e47a66457159417ebeaa1fe9531b78fa8baee181223da03c4d"} Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.197851 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ck55d" podStartSLOduration=2.7345860760000003 podStartE2EDuration="5.197833858s" podCreationTimestamp="2026-01-30 16:28:30 +0000 UTC" firstStartedPulling="2026-01-30 16:28:32.12264678 +0000 UTC m=+366.760604126" lastFinishedPulling="2026-01-30 16:28:34.585894572 +0000 UTC m=+369.223851908" observedRunningTime="2026-01-30 16:28:35.193997171 +0000 UTC m=+369.831954547" 
watchObservedRunningTime="2026-01-30 16:28:35.197833858 +0000 UTC m=+369.835791204" Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.215957 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d8wb8" podStartSLOduration=2.7636568329999998 podStartE2EDuration="5.215936581s" podCreationTimestamp="2026-01-30 16:28:30 +0000 UTC" firstStartedPulling="2026-01-30 16:28:32.143651613 +0000 UTC m=+366.781608959" lastFinishedPulling="2026-01-30 16:28:34.595931361 +0000 UTC m=+369.233888707" observedRunningTime="2026-01-30 16:28:35.21050546 +0000 UTC m=+369.848462816" watchObservedRunningTime="2026-01-30 16:28:35.215936581 +0000 UTC m=+369.853893927" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.652691 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.653225 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.700687 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.784140 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.784228 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.826119 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.045723 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.045785 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.245961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.246020 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.009812 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.010247 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.054051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.196619 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.196685 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.235250 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.263964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d8wb8" Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.281795 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.537561 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" containerID="cri-o://78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" gracePeriod=30 Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.589798 4766 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9nn5q container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.589916 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.914695 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059038 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059064 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059116 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059222 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059267 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.060366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.061017 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.071755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252" (OuterVolumeSpecName: "kube-api-access-79252") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "kube-api-access-79252". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.072129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.072690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.073897 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.074333 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.075488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161317 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161350 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161359 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161368 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161379 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161394 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252169 4766 generic.go:334] "Generic (PLEG): container finished" podID="97631abe-0d99-4f69-b208-4da9d19a8400" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" exitCode=0 Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252236 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerDied","Data":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"} Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252263 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerDied","Data":"8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421"} Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252270 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252280 4766 scope.go:117] "RemoveContainer" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.269058 4766 scope.go:117] "RemoveContainer" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" Jan 30 16:28:48 crc kubenswrapper[4766]: E0130 16:28:48.269659 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": container with ID starting with 78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086 not found: ID does not exist" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.269692 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"} err="failed to get container status \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": rpc error: code = NotFound desc = could not find container \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": container with ID starting with 78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086 not found: ID does not exist" Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.283351 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.288093 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:28:50 crc kubenswrapper[4766]: I0130 16:28:50.048218 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" path="/var/lib/kubelet/pods/97631abe-0d99-4f69-b208-4da9d19a8400/volumes" Jan 30 16:29:09 crc kubenswrapper[4766]: I0130 16:29:09.045846 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:29:09 crc kubenswrapper[4766]: I0130 16:29:09.046553 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.045778 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046422 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046466 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046997 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.047051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f" gracePeriod=600 Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530236 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f" exitCode=0 Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"} Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"} Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.531037 4766 scope.go:117] "RemoveContainer" containerID="183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.174797 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 16:30:00 crc kubenswrapper[4766]: E0130 16:30:00.175799 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.175819 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.175943 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.176535 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.178882 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.179840 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.183113 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.218921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.218996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.219046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319958 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.321800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod 
\"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.334222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.341652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.506159 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.690452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657582 4766 generic.go:334] "Generic (PLEG): container finished" podID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerID="add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e" exitCode=0 Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657683 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerDied","Data":"add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e"} Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerStarted","Data":"e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb"} Jan 30 16:30:02 crc kubenswrapper[4766]: I0130 16:30:02.896258 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049602 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.050331 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.055631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw" (OuterVolumeSpecName: "kube-api-access-s4qsw") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "kube-api-access-s4qsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.055689 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151583 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151918 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151933 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerDied","Data":"e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb"} Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670311 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb" Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670349 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" Jan 30 16:31:39 crc kubenswrapper[4766]: I0130 16:31:39.045875 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:31:39 crc kubenswrapper[4766]: I0130 16:31:39.046409 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:32:09 crc kubenswrapper[4766]: I0130 16:32:09.046254 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:32:09 crc kubenswrapper[4766]: I0130 16:32:09.046967 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.045567 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.046234 
4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.046279 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.047583 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.047696 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" gracePeriod=600 Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.707775 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" exitCode=0 Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.707857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"} Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.708219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"} Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.708264 4766 scope.go:117] "RemoveContainer" containerID="a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.774193 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 16:34:10 crc kubenswrapper[4766]: E0130 16:34:10.774964 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.774979 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.775073 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.775446 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777750 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777860 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.780253 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.790492 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914438 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " 
pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.017154 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.038469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.095581 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.279045 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 16:34:11 crc kubenswrapper[4766]: W0130 16:34:11.286581 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad5692e_34c5_4e32_ba96_cd5e6e617c62.slice/crio-41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c WatchSource:0}: Error finding container 41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c: Status 404 returned error can't find the container with id 41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.288932 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:34:12 crc kubenswrapper[4766]: I0130 16:34:12.185200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerStarted","Data":"41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c"} Jan 30 16:34:13 crc kubenswrapper[4766]: I0130 16:34:13.192398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerDied","Data":"403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304"} Jan 30 16:34:13 crc kubenswrapper[4766]: I0130 16:34:13.193345 4766 generic.go:334] "Generic (PLEG): container finished" podID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerID="403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304" exitCode=0 Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.398672 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.559073 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.565829 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9" (OuterVolumeSpecName: "kube-api-access-mtjq9") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "kube-api-access-mtjq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.574883 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659863 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659907 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659919 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207240 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerDied","Data":"41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c"} Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207291 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c" Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207386 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.021607 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023327 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" containerID="cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023372 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" containerID="cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023452 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023560 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" containerID="cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023612 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" containerID="cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" gracePeriod=30 
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023632 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" containerID="cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023651 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" containerID="cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.098675 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" containerID="cri-o://647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.223625 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.225384 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.225898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226332 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" exitCode=0 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226358 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" exitCode=0 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226367 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" exitCode=143 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226376 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" exitCode=143 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226419 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226454 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.231346 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232752 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232799 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" exitCode=2 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232870 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.233444 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.233610 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.390064 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.393763 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.395643 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.396154 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413366 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413412 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413494 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413575 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: 
\"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413764 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414039 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415505 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket" (OuterVolumeSpecName: "log-socket") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415526 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415558 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash" (OuterVolumeSpecName: "host-slash") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415647 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415682 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log" (OuterVolumeSpecName: "node-log") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.416411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.416504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417245 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.419921 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh" (OuterVolumeSpecName: "kube-api-access-4psqh") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "kube-api-access-4psqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.420106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.428472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.460986 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44h4c"] Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461223 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461237 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461245 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461251 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461264 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461270 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461277 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461284 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461293 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461299 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kubecfg-setup" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461315 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kubecfg-setup" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461323 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461330 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461339 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461345 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461355 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461361 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461374 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461381 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461387 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461395 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461400 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461480 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461488 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461497 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461504 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461512 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461523 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461530 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461536 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461545 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461552 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461559 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461683 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461696 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461702 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461807 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.463422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516248 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516334 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc 
kubenswrapper[4766]: I0130 16:34:17.516392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516484 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516518 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516783 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod 
\"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517449 4766 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517494 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517512 4766 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517524 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517558 4766 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517665 4766 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517678 4766 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517694 4766 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517709 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517722 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517736 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517785 4766 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517798 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517809 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517822 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517836 4766 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517848 4766 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517859 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517870 4766 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517881 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619975 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619910 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620145 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620162 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620232 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: 
\"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620326 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.625958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.639247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.778658 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.242430 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.244878 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245365 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245699 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245726 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245736 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245770 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245941 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"b7c7571b036dc1cbf0576f5638a00f9530f0e7ad9d69b4b12af59327bef5efe3"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245969 4766 scope.go:117] "RemoveContainer" 
containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.246071 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.248945 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250826 4766 generic.go:334] "Generic (PLEG): container finished" podID="c86b5492-8fad-4730-9587-79439536dfee" containerID="9f62ed3f25bc6771847095b8e8045bffd473ce8376e8f6e634c0ed562f4703cf" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerDied","Data":"9f62ed3f25bc6771847095b8e8045bffd473ce8376e8f6e634c0ed562f4703cf"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250909 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"75a1bcc6b31f2cef694000185b132df1bc20b86ae4a75a382758838626d5d09d"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.264070 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.296388 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.311254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.319312 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.337231 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.355291 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.368040 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.386402 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.405450 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.419447 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.434165 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.455686 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 
16:34:18.456327 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.456465 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.456505 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457164 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457202 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457219 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457571 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457609 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457640 4766 
scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457926 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457954 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457971 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458211 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458264 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458284 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458605 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458638 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 
3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458662 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458985 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459008 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459023 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459312 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459360 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459378 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459666 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459696 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} err="failed to get container status \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": rpc 
error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459731 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459965 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459995 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} err="failed to get container status \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460013 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460322 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460350 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460604 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460632 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461004 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc 
error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461044 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461382 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461428 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461716 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461735 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462016 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462043 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462294 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462314 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc 
kubenswrapper[4766]: I0130 16:34:18.462593 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"22d6b7400859e2ed0cbf6a8a7f9fc829406089f0538e65bb7577f5c435edea46"}
pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"22d6b7400859e2ed0cbf6a8a7f9fc829406089f0538e65bb7577f5c435edea46"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"506ad4c7a04f42ef0a5732b1f006296851de0cb2ce967eb0300b530c1b668103"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"82688917a0460a29cd019cc88f9714be1657f7ee18dbb117f81bcfecadb3f846"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"0cbc88b47c4e6d0abaef77ffb45c7a93fa376bd27e8926ba4ae530c6e74b7cc6"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"f721586498011f5a3be49997817d6abf23cf3be8d4c432796851c02d42295bb9"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260884 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"778d81e8098661d61bdb5e56b5d01eaa521ef7726e0fff5e12d45cdb1cded618"} Jan 30 16:34:20 crc kubenswrapper[4766]: I0130 16:34:20.047716 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" path="/var/lib/kubelet/pods/d6a299e8-188d-4777-bb82-a0994feabcff/volumes" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.270446 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.271750 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.273938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.275992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"79ae7bcae501bb7a73d5732ca84bffb5c97991491acceb963871684414c91b5d"} Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.367956 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.368022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.368043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469534 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.470072 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.470195 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.488827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.585021 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608415 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608535 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608563 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608627 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.124577 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.125572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.126049 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159736 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159859 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159896 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.160458 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"cd14fc5594090fb492f38421457d8396d7f7543f41b2e6a77bd883e197144815"} Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309580 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309590 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.360152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.388107 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" podStartSLOduration=7.388086347 podStartE2EDuration="7.388086347s" podCreationTimestamp="2026-01-30 16:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:34:24.383107149 +0000 UTC m=+719.021064505" watchObservedRunningTime="2026-01-30 16:34:24.388086347 +0000 UTC m=+719.026043693" Jan 30 16:34:25 crc kubenswrapper[4766]: I0130 16:34:25.315429 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:25 crc kubenswrapper[4766]: I0130 16:34:25.342277 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:29 crc kubenswrapper[4766]: I0130 16:34:29.039621 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:29 crc kubenswrapper[4766]: E0130 16:34:29.040069 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485" Jan 30 16:34:35 crc kubenswrapper[4766]: I0130 16:34:35.039242 4766 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: I0130 16:34:35.041218 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.073984 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074113 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074170 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074307 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:34:39 crc kubenswrapper[4766]: I0130 16:34:39.045071 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:34:39 crc kubenswrapper[4766]: I0130 16:34:39.045511 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.039766 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.398688 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.398995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"bbdc1125b2a2d4ced39fc4271a41707288f580e00c51cdac577a979d9cbd3cb4"} Jan 30 16:34:47 crc kubenswrapper[4766]: I0130 16:34:47.807292 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.039061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.039503 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"
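The machine-config-daemon liveness probe above is a plain HTTP GET against http://127.0.0.1:8798/health, and the recorded failure is a connection refusal: nothing is listening on the port yet. A self-contained sketch of the same style of check, where any transport error or non-2xx status counts as a failure; the one-second timeout is an arbitrary choice, not the pod's configured probe timeout:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check. A transport error (such as the
// "connect: connection refused" above) or a non-2xx status is a failure.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}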
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.429649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:50 crc kubenswrapper[4766]: I0130 16:34:50.443801 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerStarted","Data":"744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f"} Jan 30 16:34:51 crc kubenswrapper[4766]: I0130 16:34:51.450821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerStarted","Data":"3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b"} Jan 30 16:34:52 crc kubenswrapper[4766]: I0130 16:34:52.458750 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b" exitCode=0 Jan 30 16:34:52 crc kubenswrapper[4766]: I0130 16:34:52.458823 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b"} Jan 30 16:34:54 crc kubenswrapper[4766]: E0130 16:34:54.189302 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cde9372_207a_40f0_829b_1e0b5c662ec1.slice/crio-conmon-103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:34:54 crc kubenswrapper[4766]: I0130 16:34:54.471162 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7" exitCode=0 Jan 30 16:34:54 crc kubenswrapper[4766]: I0130 16:34:54.471256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7"} Jan 30 16:34:55 crc kubenswrapper[4766]: I0130 16:34:55.478795 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="5319ff026802c0c82e451a33c953ff8cf1736dd73a18f6abd307187dd5f7cbf4" exitCode=0 Jan 30 16:34:55 crc kubenswrapper[4766]: I0130 16:34:55.478864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"5319ff026802c0c82e451a33c953ff8cf1736dd73a18f6abd307187dd5f7cbf4"} Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.677419 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.853572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854021 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854067 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle" (OuterVolumeSpecName: "bundle") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.864142 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9" (OuterVolumeSpecName: "kube-api-access-jgjs9") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "kube-api-access-jgjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.877965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util" (OuterVolumeSpecName: "util") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954691 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954755 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954774 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.492988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f"} Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.493033 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f" Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.493171 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:35:02 crc kubenswrapper[4766]: I0130 16:35:02.078221 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.000554 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001053 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001151 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001255 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="pull" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001318 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="pull" Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001386 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="util" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001446 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="util" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001616 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.002134 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.004193 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bd9xs" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.005096 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.005564 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.015269 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.130928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.233103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.253008 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.319363 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.526049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:04 crc kubenswrapper[4766]: I0130 16:35:04.537652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" event={"ID":"463d1450-7318-4003-b30d-82dc9e1bec53","Type":"ContainerStarted","Data":"a79d66a79a7f5b750db23b68abf2fb93538a4dc242f33d202d6e2b5ee160328d"} Jan 30 16:35:06 crc kubenswrapper[4766]: I0130 16:35:06.550784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" event={"ID":"463d1450-7318-4003-b30d-82dc9e1bec53","Type":"ContainerStarted","Data":"98c0729a0d2909f704b9e6fc150502d78682796d964f12c2fa3b9ce73ed9c47d"} Jan 30 16:35:06 crc kubenswrapper[4766]: I0130 16:35:06.569325 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" podStartSLOduration=2.530632072 podStartE2EDuration="4.569307557s" podCreationTimestamp="2026-01-30 16:35:02 +0000 UTC" firstStartedPulling="2026-01-30 16:35:03.54753517 +0000 UTC m=+758.185492516" lastFinishedPulling="2026-01-30 16:35:05.586210655 +0000 UTC m=+760.224168001" observedRunningTime="2026-01-30 16:35:06.565655406 +0000 UTC m=+761.203612742" watchObservedRunningTime="2026-01-30 16:35:06.569307557 +0000 UTC m=+761.207264903" Jan 30 16:35:09 crc kubenswrapper[4766]: I0130 16:35:09.045603 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:35:09 crc kubenswrapper[4766]: I0130 16:35:09.046221 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.620014 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.621281 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.628724 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pbrwf" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.629936 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.639091 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.640077 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.642399 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.648736 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-82wxr"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.649823 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.658752 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.748271 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.748985 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.749905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.749933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750084 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750286 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" 
(UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752518 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ftwwh" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752559 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.758803 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851509 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851585 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: E0130 16:35:11.851824 4766 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: E0130 16:35:11.852051 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair podName:ed7e34e5-c04e-4852-b4a3-9e28fd5f960d nodeName:}" failed. No retries permitted until 2026-01-30 16:35:12.35202666 +0000 UTC m=+766.989984026 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-zj7fb" (UID: "ed7e34e5-c04e-4852-b4a3-9e28fd5f960d") : secret "openshift-nmstate-webhook" not found Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852100 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852258 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.875928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.875934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.885979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.936552 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.937482 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.938317 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.952030 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.954942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.967874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.977477 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.993043 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055326 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055592 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055870 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.065978 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157487 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157542 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.158468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.158941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: 
\"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.159153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.160305 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.163314 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.163778 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.176402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.241651 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.247982 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30ca6b4_bd87_4d25_92dd_f3d94410f2a3.slice/crio-5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568 WatchSource:0}: Error finding container 5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568: Status 404 returned error can't find the container with id 5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568 Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.310159 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.360613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.362207 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.364639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.378876 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ac0f62_2413_4258_a957_35039942d0f7.slice/crio-2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4 WatchSource:0}: Error finding container 2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4: Status 404 returned error can't find the container with id 2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4 Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.511829 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.523645 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod381a1829_22f0_46b2_827d_92cc919105b8.slice/crio-063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d WatchSource:0}: Error finding container 063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d: Status 404 returned error can't find the container with id 063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.557061 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.589287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.590904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" event={"ID":"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3","Type":"ContainerStarted","Data":"5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.593346 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7977978877-p7rd4" event={"ID":"381a1829-22f0-46b2-827d-92cc919105b8","Type":"ContainerStarted","Data":"063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.594405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-82wxr" event={"ID":"121c0166-75c7-4f39-a07b-c89cb81d2fd8","Type":"ContainerStarted","Data":"057fdc3d90e854ae0c9233ae76abfd21fc0773043e75ffb2ccb775261f7b0670"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.734454 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.740404 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7e34e5_c04e_4852_b4a3_9e28fd5f960d.slice/crio-732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b WatchSource:0}: Error finding container 732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b: Status 404 returned error can't find the container with id 732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.601098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" event={"ID":"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d","Type":"ContainerStarted","Data":"732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b"} Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.603135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7977978877-p7rd4" event={"ID":"381a1829-22f0-46b2-827d-92cc919105b8","Type":"ContainerStarted","Data":"a8c5bd6d627f0391f72c6ecdfbe2e7043c67e77f3961b40d56f8cbc123288c9d"} Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.633093 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7977978877-p7rd4" podStartSLOduration=2.633075714 podStartE2EDuration="2.633075714s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:35:13.630333268 +0000 UTC m=+768.268290634" watchObservedRunningTime="2026-01-30 16:35:13.633075714 +0000 UTC m=+768.271033060" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.628508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" 
event={"ID":"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3","Type":"ContainerStarted","Data":"43663a3c3948da6f3bb9050df62ced2f22d35c35389a220a1c58f97b160b4d2f"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.630503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-82wxr" event={"ID":"121c0166-75c7-4f39-a07b-c89cb81d2fd8","Type":"ContainerStarted","Data":"5f955387c142d6152aa72c93e3a22cc5b6418dcf260225b86468a4e7471ae981"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.631009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.632077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"06c964640e084928c5191bc00c31fa05177e2f9e8b07b0248d9ac652202402a8"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.633881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" event={"ID":"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d","Type":"ContainerStarted","Data":"80c46d4f11c6b9af97ee8ade02b26f5e0c516804ceeab295df15b00d598a3c25"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.634449 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.649616 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" podStartSLOduration=2.547173594 podStartE2EDuration="5.649600865s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.250355963 +0000 UTC m=+766.888313309" lastFinishedPulling="2026-01-30 16:35:15.352783224 +0000 UTC m=+769.990740580" observedRunningTime="2026-01-30 16:35:16.646744346 +0000 UTC m=+771.284701692" watchObservedRunningTime="2026-01-30 16:35:16.649600865 +0000 UTC m=+771.287558201" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.674535 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-82wxr" podStartSLOduration=2.25684625 podStartE2EDuration="5.674503152s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.003463439 +0000 UTC m=+766.641420785" lastFinishedPulling="2026-01-30 16:35:15.421120341 +0000 UTC m=+770.059077687" observedRunningTime="2026-01-30 16:35:16.664324322 +0000 UTC m=+771.302281678" watchObservedRunningTime="2026-01-30 16:35:16.674503152 +0000 UTC m=+771.312460498" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.707558 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" podStartSLOduration=3.005994386 podStartE2EDuration="5.707285707s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.743255646 +0000 UTC m=+767.381212992" lastFinishedPulling="2026-01-30 16:35:15.444546967 +0000 UTC m=+770.082504313" observedRunningTime="2026-01-30 16:35:16.688058516 +0000 UTC m=+771.326015862" watchObservedRunningTime="2026-01-30 16:35:16.707285707 +0000 UTC m=+771.345243053" Jan 30 16:35:18 crc kubenswrapper[4766]: I0130 16:35:18.645397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"2c5cc759ad98f952f1e523184193bf2408e6e57c04cc9a0dd4ca4f335a3f34cd"} Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.006300 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.023069 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" podStartSLOduration=5.543252648 podStartE2EDuration="11.023049722s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.383004283 +0000 UTC m=+767.020961629" lastFinishedPulling="2026-01-30 16:35:17.862801367 +0000 UTC m=+772.500758703" observedRunningTime="2026-01-30 16:35:18.670819138 +0000 UTC m=+773.308776494" watchObservedRunningTime="2026-01-30 16:35:22.023049722 +0000 UTC m=+776.661007068" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.311544 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.311889 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.324139 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.674552 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.734228 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:32 crc kubenswrapper[4766]: I0130 16:35:32.563873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.045929 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.046688 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.046758 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.047548 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:35:39 crc 
kubenswrapper[4766]: I0130 16:35:39.047627 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" gracePeriod=600 Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768482 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" exitCode=0 Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768526 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"} Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf"} Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768886 4766 scope.go:117] "RemoveContainer" containerID="2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.450661 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.453046 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.460030 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.460291 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.589810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.590053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.590103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.691867 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.691984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692040 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692813 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.718801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.813392 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.013895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.783708 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" containerID="cri-o://a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" gracePeriod=15 Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829467 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="4ed8858646645e29e0e1f5dc3c37cd6744bc9c6d25d0edc3cd0331bfbd7f56f0" exitCode=0 Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"4ed8858646645e29e0e1f5dc3c37cd6744bc9c6d25d0edc3cd0331bfbd7f56f0"} Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerStarted","Data":"8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.152720 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8fgxh_695ff148-b91d-49a2-ad3b-9a240f11e454/console/0.log" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.153078 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.314695 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.314770 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315363 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config" (OuterVolumeSpecName: "console-config") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca" (OuterVolumeSpecName: "service-ca") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333163 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333313 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf" (OuterVolumeSpecName: "kube-api-access-cb5jf") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "kube-api-access-cb5jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417083 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417155 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417196 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417212 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417226 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417238 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417251 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837058 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8fgxh_695ff148-b91d-49a2-ad3b-9a240f11e454/console/0.log" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837109 4766 generic.go:334] "Generic (PLEG): container finished" podID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" exitCode=2 Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837141 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerDied","Data":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837169 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerDied","Data":"49a469bfbf32d87fdc9772eb7cb8b7a2cfda12f2178ff6d5d4530255ca2db5f7"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837210 4766 scope.go:117] "RemoveContainer" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837232 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.852335 4766 scope.go:117] "RemoveContainer" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: E0130 16:35:48.852916 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": container with ID starting with a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532 not found: ID does not exist" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.852966 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"} err="failed to get container status \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": rpc error: code = NotFound desc = could not find container \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": container with ID starting with a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532 not found: ID does not exist" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.869289 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.873696 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.807714 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:49 crc kubenswrapper[4766]: E0130 16:35:49.808323 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.808339 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.808455 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.809380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.824018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.851655 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="a29420d4a2ee559fdc0731a79df5db056cec11144a44618263f8a7fe5f30a7d0" exitCode=0 Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.851766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"a29420d4a2ee559fdc0731a79df5db056cec11144a44618263f8a7fe5f30a7d0"} Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936521 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: 
\"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.039222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.046815 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" path="/var/lib/kubelet/pods/695ff148-b91d-49a2-ad3b-9a240f11e454/volumes" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.062296 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.150685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.608387 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:50 crc kubenswrapper[4766]: W0130 16:35:50.612674 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3289ef2d_c514_4e8a_91f9_200f8b7742dd.slice/crio-280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea WatchSource:0}: Error finding container 280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea: Status 404 returned error can't find the container with id 280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.860365 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="cf60c97e9515f8005b247d17957550bd7aa3b775f838d376d06bdc764bba4d06" exitCode=0 Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.860419 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"cf60c97e9515f8005b247d17957550bd7aa3b775f838d376d06bdc764bba4d06"} Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862277 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" exitCode=0 Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862298 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2"} Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea"} Jan 30 16:35:52 crc 
kubenswrapper[4766]: I0130 16:35:52.150676 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.276461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle" (OuterVolumeSpecName: "bundle") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.283275 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g" (OuterVolumeSpecName: "kube-api-access-xfs8g") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "kube-api-access-xfs8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.291354 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util" (OuterVolumeSpecName: "util") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376924 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376978 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376992 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2"} Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882202 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882218 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.883804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} Jan 30 16:35:54 crc kubenswrapper[4766]: I0130 16:35:54.897631 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" exitCode=0 Jan 30 16:35:54 crc kubenswrapper[4766]: I0130 16:35:54.897663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} Jan 30 16:35:55 crc kubenswrapper[4766]: I0130 16:35:55.906379 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} Jan 30 16:35:55 crc kubenswrapper[4766]: I0130 16:35:55.932114 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8lj5" podStartSLOduration=2.488849472 podStartE2EDuration="6.932092669s" podCreationTimestamp="2026-01-30 16:35:49 +0000 UTC" firstStartedPulling="2026-01-30 16:35:50.863485623 +0000 UTC m=+805.501442969" lastFinishedPulling="2026-01-30 16:35:55.30672882 +0000 UTC m=+809.944686166" observedRunningTime="2026-01-30 16:35:55.927553023 +0000 UTC m=+810.565510369" watchObservedRunningTime="2026-01-30 16:35:55.932092669 +0000 UTC m=+810.570050015" Jan 30 16:36:00 crc kubenswrapper[4766]: I0130 
16:36:00.151570 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:00 crc kubenswrapper[4766]: I0130 16:36:00.153198 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.209921 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8lj5" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" probeResult="failure" output=< Jan 30 16:36:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:36:01 crc kubenswrapper[4766]: > Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762474 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762756 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="pull" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762771 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="pull" Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762790 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762797 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762819 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="util" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762828 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="util" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762947 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.763447 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771716 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8rlg6" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771795 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.772000 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.772999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.796791 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.910778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.910859 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.911012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012763 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012858 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.020146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.020189 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.034830 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.083780 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.113838 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.115656 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.120280 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.122580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xm4mf" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.122793 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.131647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217633 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 
16:36:02.325005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.343591 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.350891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.468606 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.513700 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.710825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.943081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" event={"ID":"5aa43b8e-3f06-441e-ade0-264da132ec73","Type":"ContainerStarted","Data":"9e619a9ff7144ad61b032bce7c1d57fa12b75d8a4555f752c788c8adf52acd7d"} Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.944305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" event={"ID":"8f4ddea0-a380-401d-849f-6968d6d80e8b","Type":"ContainerStarted","Data":"8093a03634dc6f265c75fa34bf526f27f52e20cd6faf07d52462ad22f30e983d"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.988191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" event={"ID":"8f4ddea0-a380-401d-849f-6968d6d80e8b","Type":"ContainerStarted","Data":"6cb06be018cb1dc73deb3e06fa95c9c10d71b75c766628df10648d8b73b3dfdd"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.988794 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.990007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" event={"ID":"5aa43b8e-3f06-441e-ade0-264da132ec73","Type":"ContainerStarted","Data":"4b4363d975b03f0dd583639c564b496fec4e643ae2789f7f3bc429df5e7f9290"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.990363 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:09 crc kubenswrapper[4766]: I0130 16:36:09.008699 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" podStartSLOduration=1.910686766 podStartE2EDuration="8.008682512s" podCreationTimestamp="2026-01-30 16:36:01 +0000 UTC" firstStartedPulling="2026-01-30 16:36:02.531942984 +0000 UTC m=+817.169900340" lastFinishedPulling="2026-01-30 16:36:08.62993874 +0000 UTC m=+823.267896086" observedRunningTime="2026-01-30 16:36:09.005728931 +0000 UTC m=+823.643686297" watchObservedRunningTime="2026-01-30 16:36:09.008682512 +0000 UTC m=+823.646639858" Jan 30 16:36:09 crc kubenswrapper[4766]: I0130 16:36:09.031425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" podStartSLOduration=1.111027266 podStartE2EDuration="7.031404329s" podCreationTimestamp="2026-01-30 16:36:02 +0000 UTC" firstStartedPulling="2026-01-30 16:36:02.724719315 +0000 UTC m=+817.362676661" lastFinishedPulling="2026-01-30 16:36:08.645096388 +0000 UTC m=+823.283053724" observedRunningTime="2026-01-30 16:36:09.030270397 +0000 UTC m=+823.668227743" watchObservedRunningTime="2026-01-30 16:36:09.031404329 +0000 UTC m=+823.669361675" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.200890 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.245537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.436097 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:12 crc kubenswrapper[4766]: I0130 16:36:12.015233 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j8lj5" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" containerID="cri-o://279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" gracePeriod=2 Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.026105 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029669 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" exitCode=0 Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea"} Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029769 4766 scope.go:117] "RemoveContainer" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.056793 4766 scope.go:117] "RemoveContainer" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.079232 4766 scope.go:117] "RemoveContainer" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.097384 4766 scope.go:117] "RemoveContainer" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.098047 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": container with ID starting with 279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f not found: ID does not exist" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.098109 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} err="failed to get container status \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": rpc error: code = NotFound desc = could not find container \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": container with ID starting with 279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.098143 4766 scope.go:117] "RemoveContainer" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.099800 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": container with ID starting with 6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5 not found: ID does not exist" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100243 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} err="failed to get container status \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": rpc error: code = NotFound desc = could not find container \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": container with ID starting with 6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5 not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100277 4766 scope.go:117] "RemoveContainer" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.100712 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": container with ID starting with f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2 not found: ID does not exist" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100743 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2"} err="failed to get container status \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": rpc error: code = NotFound desc = could not find container \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": container with ID starting with f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2 not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184144 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.185132 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities" (OuterVolumeSpecName: "utilities") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.190241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf" (OuterVolumeSpecName: "kube-api-access-dvzmf") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "kube-api-access-dvzmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.286875 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.286923 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.292771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.388282 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.035435 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.074594 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.081256 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:16 crc kubenswrapper[4766]: I0130 16:36:16.049317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" path="/var/lib/kubelet/pods/3289ef2d-c514-4e8a-91f9-200f8b7742dd/volumes" Jan 30 16:36:22 crc kubenswrapper[4766]: I0130 16:36:22.479031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.086729 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790526 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fr242"] Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790774 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-content" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790787 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-content" Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790798 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790803 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790817 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-utilities" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790825 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-utilities" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790922 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.793348 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.796593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.796861 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.797482 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lr98l" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.812139 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.813104 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.814460 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.829889 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.894857 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pfspk"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.896090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pfspk" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901228 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901273 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901312 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901347 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-l98vt" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902008 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc 
kubenswrapper[4766]: I0130 16:36:42.902127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.935365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.938744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.940877 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.967854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003563 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003606 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003861 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.004015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.005383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.005455 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.022459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.022727 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.022831 4766 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.022912 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert podName:85bd5ff3-9577-4598-92a9-f24f00c56187 
nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.522889211 +0000 UTC m=+858.160846557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert") pod "frr-k8s-webhook-server-7df86c4f6c-z9cbg" (UID: "85bd5ff3-9577-4598-92a9-f24f00c56187") : secret "frr-k8s-webhook-server-cert" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.023582 4766 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.023641 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs podName:4a563046-adc2-4e82-9b89-a549d3f06250 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.523625291 +0000 UTC m=+858.161582637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs") pod "frr-k8s-fr242" (UID: "4a563046-adc2-4e82-9b89-a549d3f06250") : secret "frr-k8s-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.034149 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.054482 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.071250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105438 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105571 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105623 4766 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105660 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105700 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.605682876 +0000 UTC m=+858.243640272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "speaker-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105755 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.605735757 +0000 UTC m=+858.243693103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105827 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.106477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.124486 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207625 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.207977 4766 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.208047 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs podName:f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.708029281 +0000 UTC m=+858.345986627 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs") pod "controller-6968d8fdc4-7v5hl" (UID: "f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873") : secret "controller-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.209354 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.223821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.225616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613681 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.613763 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.613852 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:44.61382924 +0000 UTC m=+859.251786596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.617024 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.617222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.618383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.713724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.714552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.718993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.731806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.855208 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.096539 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.103950 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f6fbd7_b3c4_4f9f_8689_6ef8bfffc873.slice/crio-88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259 WatchSource:0}: Error finding container 88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259: Status 404 returned error can't find the container with id 88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259 Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.173928 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.184476 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bd5ff3_9577_4598_92a9_f24f00c56187.slice/crio-afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74 WatchSource:0}: Error finding container afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74: Status 404 returned error can't find the container with id afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74 Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.203490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"7a57d12be9db33f7944867a5d5772a42224afe93156b5996ce7704ebaafb810b"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.206741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.208208 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" event={"ID":"85bd5ff3-9577-4598-92a9-f24f00c56187","Type":"ContainerStarted","Data":"afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.625526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.632990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.712744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.734088 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad0227f_0410_4f5e_bfc5_7dd96164c9b5.slice/crio-8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df WatchSource:0}: Error finding container 8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df: Status 404 returned error can't find the container with id 8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.216862 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"382ee6499400dc94efa59f0668fcda135b7569b8752a0f523567aecfc009ebde"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.217345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.217365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"b62e4e90d51a1b2ce278ac45697a19f01a3546f6bd182006d30a7104b5d374f1"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.218469 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"c8717d2d48641eff4fc5b1b9212396898a8a851794941527db40589ddbad6bea"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.218513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.235328 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7v5hl" podStartSLOduration=3.23530828 podStartE2EDuration="3.23530828s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:36:45.232095971 +0000 UTC m=+859.870053327" watchObservedRunningTime="2026-01-30 16:36:45.23530828 +0000 UTC m=+859.873265626" Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.238367 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"fae748f14ee98118df529619f1e5571f008377e981a821124004c21af1051271"} Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.238538 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pfspk" Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.265139 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pfspk" podStartSLOduration=4.265121922 podStartE2EDuration="4.265121922s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:36:46.263291531 +0000 UTC m=+860.901248907" 
watchObservedRunningTime="2026-01-30 16:36:46.265121922 +0000 UTC m=+860.903079268" Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.280830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" event={"ID":"85bd5ff3-9577-4598-92a9-f24f00c56187","Type":"ContainerStarted","Data":"44d13023bd8846f5d03e9ed900ff2395ae7f6c094a213ac4119198efb563e41e"} Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.281431 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.282777 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="c8430d33a25d7dfebc61cdfe3fa72c14282cac69cf25a679cf1b274982e79c2c" exitCode=0 Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.282814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"c8430d33a25d7dfebc61cdfe3fa72c14282cac69cf25a679cf1b274982e79c2c"} Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.297909 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" podStartSLOduration=2.719123585 podStartE2EDuration="10.297893317s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="2026-01-30 16:36:44.187975856 +0000 UTC m=+858.825933202" lastFinishedPulling="2026-01-30 16:36:51.766745588 +0000 UTC m=+866.404702934" observedRunningTime="2026-01-30 16:36:52.297730312 +0000 UTC m=+866.935687678" watchObservedRunningTime="2026-01-30 16:36:52.297893317 +0000 UTC m=+866.935850663" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.158477 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.159778 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.170153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.292076 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="3d9a5e34b7fc44db8f475d327060cd21e9f5b5ba7f5587d9fc0f1eea1c0dafc5" exitCode=0 Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.292447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"3d9a5e34b7fc44db8f475d327060cd21e9f5b5ba7f5587d9fc0f1eea1c0dafc5"} Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.358850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359329 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") 
" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.360218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.384730 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.478454 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.702911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.300869 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="abf4bfd5c5ae534c7f2f2661737b4d1b3a1f987982416c44e5f867efc92dc5df" exitCode=0 Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.300958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"abf4bfd5c5ae534c7f2f2661737b4d1b3a1f987982416c44e5f867efc92dc5df"} Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.303485 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2" exitCode=0 Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.304485 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"} Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.304575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"61610dab44c5f75f053174ef3d6dd6d46a8f7dfdffe1f5a823849014fc14712e"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"a7a4297585f60a5fb97fb762e65def52b2100d59397b348ca9bd938d92d2e9da"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325720 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"94604a8c7c937828a757accec4fc2325738b11249749b12f406bd41a73640f81"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325729 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" 
event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"2f6caad5ff6f24679e25993fab9707766d6967277ef04bebefd1400d0e1f6f62"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"83ee4acf4c0431e29bbe4cffe5fe7ac7994acc1097aa42ae5f809f0fed43ff25"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"484c406d7229ad94d5a4d0d213d9c2163e2ef33675b290bddc271c8e30414915"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.327067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.334359 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733" exitCode=0 Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.334488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.338555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"e5d6973d5b8e0393c5c52c923814601658e1e4030da75738e9288444c5a5cb12"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.339326 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.379792 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fr242" podStartSLOduration=6.504529336 podStartE2EDuration="14.37977469s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="2026-01-30 16:36:43.878669139 +0000 UTC m=+858.516626485" lastFinishedPulling="2026-01-30 16:36:51.753914493 +0000 UTC m=+866.391871839" observedRunningTime="2026-01-30 16:36:56.373759983 +0000 UTC m=+871.011717339" watchObservedRunningTime="2026-01-30 16:36:56.37977469 +0000 UTC m=+871.017732036" Jan 30 16:36:57 crc kubenswrapper[4766]: I0130 16:36:57.346946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"} Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.714640 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.754519 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.777627 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-vxf97" podStartSLOduration=3.341668958 podStartE2EDuration="5.777609136s" podCreationTimestamp="2026-01-30 16:36:53 +0000 UTC" firstStartedPulling="2026-01-30 16:36:54.306016717 +0000 UTC m=+868.943974063" lastFinishedPulling="2026-01-30 16:36:56.741956895 +0000 UTC m=+871.379914241" observedRunningTime="2026-01-30 16:36:57.380161539 +0000 UTC m=+872.018118895" watchObservedRunningTime="2026-01-30 16:36:58.777609136 +0000 UTC m=+873.415566482" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.479735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.480123 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.519971 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.776229 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.859075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.446671 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.496988 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.716261 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pfspk" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.219748 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.223386 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.225606 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.233007 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.250818 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.250929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.251032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.352907 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.353244 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.353387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.354467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.354543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.374669 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.398267 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxf97" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" containerID="cri-o://4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" gracePeriod=2 Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.550444 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.802076 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.862087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.863382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.863526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.864260 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities" (OuterVolumeSpecName: "utilities") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.870315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95" (OuterVolumeSpecName: "kube-api-access-w8b95") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "kube-api-access-w8b95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.885064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965491 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965544 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965557 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.041452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:07 crc kubenswrapper[4766]: W0130 16:37:07.047282 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2619907_b01e_44ad_99e7_a1ae313da017.slice/crio-4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b WatchSource:0}: Error finding container 4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b: Status 404 returned error can't find the container with id 4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408036 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" containerID="1959e6dd1b2ba4a3477f420a2bbea12940cdba112a8ba32bc20c6d9dfec9ca9b" exitCode=0 Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"1959e6dd1b2ba4a3477f420a2bbea12940cdba112a8ba32bc20c6d9dfec9ca9b"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerStarted","Data":"4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"}
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"61610dab44c5f75f053174ef3d6dd6d46a8f7dfdffe1f5a823849014fc14712e"}
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413322 4766 scope.go:117] "RemoveContainer" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413438 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.439664 4766 scope.go:117] "RemoveContainer" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.458963 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"]
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.464511 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"]
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.482055 4766 scope.go:117] "RemoveContainer" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.497531 4766 scope.go:117] "RemoveContainer" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"
Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.498079 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": container with ID starting with 4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc not found: ID does not exist" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.498199 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"} err="failed to get container status \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": rpc error: code = NotFound desc = could not find container \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": container with ID starting with 4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc not found: ID does not exist"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.498288 4766 scope.go:117] "RemoveContainer" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"
Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.498951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": container with ID starting with 47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733 not found: ID does not exist" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.499002 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} err="failed to get container status \"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": rpc error: code = NotFound desc = could not find container \"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": container with ID starting with 47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733 not found: ID does not exist"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.499036 4766 scope.go:117] "RemoveContainer" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"
Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.500013 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": container with ID starting with 4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2 not found: ID does not exist" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"
Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.500411 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"} err="failed to get container status \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": rpc error: code = NotFound desc = could not find container \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": container with ID starting with 4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2 not found: ID does not exist"
Jan 30 16:37:08 crc kubenswrapper[4766]: I0130 16:37:08.047050 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" path="/var/lib/kubelet/pods/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93/volumes"
Jan 30 16:37:11 crc kubenswrapper[4766]: I0130 16:37:11.444794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerStarted","Data":"94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225"}
Jan 30 16:37:12 crc kubenswrapper[4766]: I0130 16:37:12.451857 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" containerID="94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225" exitCode=0
Jan 30 16:37:12 crc kubenswrapper[4766]: I0130 16:37:12.451910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225"}
Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.461450 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" containerID="5ae0733e68fbb3ac77bf630446515e496329b8e9a3abab1728364758f402ac1e" exitCode=0
containerID="5ae0733e68fbb3ac77bf630446515e496329b8e9a3abab1728364758f402ac1e" exitCode=0 Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.461555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"5ae0733e68fbb3ac77bf630446515e496329b8e9a3abab1728364758f402ac1e"} Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.718069 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fr242" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.755652 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896201 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896457 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.897817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle" (OuterVolumeSpecName: "bundle") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.898230 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.904359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c" (OuterVolumeSpecName: "kube-api-access-hgm9c") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "kube-api-access-hgm9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.908467 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util" (OuterVolumeSpecName: "util") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.999649 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.999702 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b"} Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476070 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b" Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476075 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.703431 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704134 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704146 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704156 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="util" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704162 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="util" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704220 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704227 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704235 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-content" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704241 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-content" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704252 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="pull" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="pull" Jan 30 16:37:19 crc 
kubenswrapper[4766]: E0130 16:37:19.704274 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-utilities" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704280 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-utilities" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704384 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704394 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704794 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.708306 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8cqrb" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.708936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.709005 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.719228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.776220 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.776381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.877380 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.877449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " 
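
The RemoveStaleState burst above fires while syncing the newly added cert-manager-operator pod: the CPU and memory manager state still holds per-container assignments for pods that no longer exist (the bundle-extract pod a2619907-... and the marketplace pod ce9dc1b9-... deleted earlier), so those entries are purged first. A toy Go sketch of that reconciliation, assuming a simple map-based state rather than kubelet's checkpointed one:

package main

import "fmt"

// removeStaleState drops assignments whose pod is no longer active, the way
// cpu_manager/memory_manager prune entries for deleted pods before new
// workloads are admitted.
func removeStaleState(assignments map[string][]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for _, name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID) // deleting during range is safe in Go
	}
}

func main() {
	assignments := map[string][]string{
		"a2619907-b01e-44ad-99e7-a1ae313da017": {"extract", "util", "pull"},
		"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93": {"registry-server", "extract-content", "extract-utilities"},
	}
	active := map[string]bool{} // neither pod exists any more
	removeStaleState(assignments, active)
}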
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.878080 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.901113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.024396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.281591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.507598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" event={"ID":"e8d87956-3550-49b7-957e-56d39f9b81bf","Type":"ContainerStarted","Data":"c08571034286cbcc6601ec1daa16af854ca6fdd1c46435726ba7a2914558aadb"} Jan 30 16:37:24 crc kubenswrapper[4766]: I0130 16:37:24.535974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" event={"ID":"e8d87956-3550-49b7-957e-56d39f9b81bf","Type":"ContainerStarted","Data":"50a6768314658c0aef5a8eaa9d961cf800a9675d42cdb608e4907e5c06746de3"} Jan 30 16:37:24 crc kubenswrapper[4766]: I0130 16:37:24.559342 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" podStartSLOduration=2.093519601 podStartE2EDuration="5.559321131s" podCreationTimestamp="2026-01-30 16:37:19 +0000 UTC" firstStartedPulling="2026-01-30 16:37:20.289277915 +0000 UTC m=+894.927235261" lastFinishedPulling="2026-01-30 16:37:23.755079445 +0000 UTC m=+898.393036791" observedRunningTime="2026-01-30 16:37:24.558015884 +0000 UTC m=+899.195973260" watchObservedRunningTime="2026-01-30 16:37:24.559321131 +0000 UTC m=+899.197278477" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.964509 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.965992 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.965992 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969749 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.997261 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25857"]
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.071830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.072003 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.072106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.074120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.074244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.106390 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.289432 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.969893 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574358 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" exitCode=0 Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574688 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"} Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"abb8fe20ff8febe1d8453814192f1c606f41fd3fcad611e77b0dc1734c540c56"} Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.598002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.948965 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.950265 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.958506 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.958694 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.959209 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vjnc8" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.964506 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.026752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.027041 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.128051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.128223 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.156544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.167286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.275618 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.608716 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81" exitCode=0 Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.608914 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.729992 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:29 crc kubenswrapper[4766]: W0130 16:37:29.732398 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1682925_c14f_425a_b072_535a37cdca48.slice/crio-a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4 WatchSource:0}: Error finding container a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4: Status 404 returned error can't find the container with id a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4 Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.619677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.621955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" event={"ID":"b1682925-c14f-425a-b072-535a37cdca48","Type":"ContainerStarted","Data":"a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4"} Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.641094 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25857" podStartSLOduration=3.206846806 podStartE2EDuration="5.641056096s" podCreationTimestamp="2026-01-30 16:37:25 +0000 UTC" firstStartedPulling="2026-01-30 16:37:27.576022647 +0000 UTC m=+902.213979993" lastFinishedPulling="2026-01-30 16:37:30.010231937 +0000 UTC m=+904.648189283" observedRunningTime="2026-01-30 16:37:30.636804318 +0000 UTC m=+905.274761674" watchObservedRunningTime="2026-01-30 16:37:30.641056096 +0000 UTC m=+905.279013442" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.313297 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.317943 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.320424 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cw248" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.327047 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.395361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.395422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.497031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.497152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.516957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.517299 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.641400 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.093576 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.290474 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.290537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.345232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" event={"ID":"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622","Type":"ContainerStarted","Data":"32f83a3c64eea8078a35cbaf0925938f5d65ed7722efc40056d7bc1f58237195"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665866 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" event={"ID":"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622","Type":"ContainerStarted","Data":"ff2b7038c2108e42282951273b6f2371080942271940a12a3292f3c8698d0cf8"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665893 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.667683 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" event={"ID":"b1682925-c14f-425a-b072-535a37cdca48","Type":"ContainerStarted","Data":"d1385ff266f168681398029c2230e913564ad9923fbbfeaf2f50103fb3bff937"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.687908 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" podStartSLOduration=3.687876939 podStartE2EDuration="3.687876939s" podCreationTimestamp="2026-01-30 16:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:37:36.682047828 +0000 UTC m=+911.320005174" watchObservedRunningTime="2026-01-30 16:37:36.687876939 +0000 UTC m=+911.325834285" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.711889 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" podStartSLOduration=2.624086076 podStartE2EDuration="8.711863051s" podCreationTimestamp="2026-01-30 16:37:28 +0000 UTC" firstStartedPulling="2026-01-30 16:37:29.735236176 +0000 UTC m=+904.373193522" lastFinishedPulling="2026-01-30 16:37:35.823013151 +0000 UTC m=+910.460970497" observedRunningTime="2026-01-30 16:37:36.698952184 +0000 UTC m=+911.336909530" watchObservedRunningTime="2026-01-30 16:37:36.711863051 +0000 UTC m=+911.349820397" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.730995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.793838 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:38 crc kubenswrapper[4766]: I0130 16:37:38.684039 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25857" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" containerID="cri-o://8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" gracePeriod=2 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.017302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.018927 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.031726 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.045295 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.045419 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081660 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.095381 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.095381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.182819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") "
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") "
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183342 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") "
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183676 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.184168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.185214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.185670 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities" (OuterVolumeSpecName: "utilities") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.189424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f" (OuterVolumeSpecName: "kube-api-access-dbx2f") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "kube-api-access-dbx2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.200419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.243879 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284774 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284818 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284835 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.391845 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.652377 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: W0130 16:37:39.672832 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf6c0939_3788_45ef_b4d3_0f198fb4039f.slice/crio-33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608 WatchSource:0}: Error finding container 33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608: Status 404 returned error can't find the container with id 33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.691635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerStarted","Data":"33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698668 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" exitCode=0 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"abb8fe20ff8febe1d8453814192f1c606f41fd3fcad611e77b0dc1734c540c56"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698764 4766 scope.go:117] "RemoveContainer" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698768 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25857"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.721519 4766 scope.go:117] "RemoveContainer" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.743395 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25857"]
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.749499 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25857"]
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.757011 4766 scope.go:117] "RemoveContainer" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.784040 4766 scope.go:117] "RemoveContainer" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"
Jan 30 16:37:39 crc kubenswrapper[4766]: E0130 16:37:39.785462 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": container with ID starting with 8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36 not found: ID does not exist" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.785550 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} err="failed to get container status \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": rpc error: code = NotFound desc = could not find container \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": container with ID starting with 8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36 not found: ID does not exist"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.785596 4766 scope.go:117] "RemoveContainer" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"
Jan 30 16:37:39 crc kubenswrapper[4766]: E0130 16:37:39.788567 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": container with ID starting with eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81 not found: ID does not exist" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.788603 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} err="failed to get container status \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": rpc error: code = NotFound desc = could not find container \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": container with ID starting with eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81 not found: ID does not exist"
Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.788626 4766 scope.go:117] "RemoveContainer" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"
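
The RemoveContainer / NotFound exchanges above (and the one that continues just below) follow a deliberately idempotent pattern: the kubelet asks the runtime for the container's status before deleting, and a NotFound answer means the container is already gone, so cleanup can be treated as complete. A minimal sketch of that decision, assuming only the standard gRPC status package; containerStatus is a hypothetical stand-in for the CRI ContainerStatus call, not kubelet code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for the runtime's ContainerStatus RPC; here it
// always answers NotFound, mirroring the log records above.
func containerStatus(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent treats NotFound as "already deleted", the same outcome the
// kubelet settles for when DeleteContainer reports the errors logged above.
func removeIfPresent(id string) error {
	if err := containerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // nothing left to remove
		}
		return err
	}
	// ... a real implementation would issue the RemoveContainer call here ...
	return nil
}

func main() {
	fmt.Println(removeIfPresent("8fa62bbbc438")) // prints <nil>
}

Treating NotFound as success is what lets the overlapping cleanup paths visible in this window (PLEG ContainerDied events, API-driven SyncLoop DELETE/REMOVE, and scope.go RemoveContainer retries) race without spurious failures.
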
failed" err="rpc error: code = NotFound desc = could not find container \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": container with ID starting with a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d not found: ID does not exist" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.789139 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"} err="failed to get container status \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": rpc error: code = NotFound desc = could not find container \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": container with ID starting with a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d not found: ID does not exist" Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.053835 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" path="/var/lib/kubelet/pods/0a17fb46-17ee-46fe-9e72-540aa19604cf/volumes" Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.705301 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" exitCode=0 Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.705355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b"} Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.645234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.728279 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" exitCode=0 Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.728314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7"} Jan 30 16:37:44 crc kubenswrapper[4766]: I0130 16:37:44.737944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerStarted","Data":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} Jan 30 16:37:44 crc kubenswrapper[4766]: I0130 16:37:44.769953 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq6d9" podStartSLOduration=2.33127993 podStartE2EDuration="5.769933401s" podCreationTimestamp="2026-01-30 16:37:39 +0000 UTC" firstStartedPulling="2026-01-30 16:37:40.706457456 +0000 UTC m=+915.344414802" lastFinishedPulling="2026-01-30 16:37:44.145110927 +0000 UTC m=+918.783068273" observedRunningTime="2026-01-30 16:37:44.767899265 +0000 UTC m=+919.405856611" watchObservedRunningTime="2026-01-30 16:37:44.769933401 +0000 UTC m=+919.407890747" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 
16:37:45.945999 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946328 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-content" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946349 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-content" Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946377 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946393 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-utilities" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946401 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-utilities" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946523 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.947035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.948886 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zh89x" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.957561 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.078151 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.078366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.180089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.180236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") 
pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.198979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.199574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.266653 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.762956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:46 crc kubenswrapper[4766]: W0130 16:37:46.767668 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd635eb48_c2c9_404e_9ffb_c8385134670b.slice/crio-f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba WatchSource:0}: Error finding container f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba: Status 404 returned error can't find the container with id f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.755863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9lmrd" event={"ID":"d635eb48-c2c9-404e-9ffb-c8385134670b","Type":"ContainerStarted","Data":"9242049d725f16b934a52e4def0df41908d2236ea945d97505f28750b7fa9d29"} Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.756543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9lmrd" event={"ID":"d635eb48-c2c9-404e-9ffb-c8385134670b","Type":"ContainerStarted","Data":"f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba"} Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.776914 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-9lmrd" podStartSLOduration=2.776887999 podStartE2EDuration="2.776887999s" podCreationTimestamp="2026-01-30 16:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:37:47.771838529 +0000 UTC m=+922.409795875" watchObservedRunningTime="2026-01-30 16:37:47.776887999 +0000 UTC m=+922.414845345" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.392111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.393143 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.433950 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.808980 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.850809 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:51 crc kubenswrapper[4766]: I0130 16:37:51.778108 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fq6d9" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" containerID="cri-o://6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" gracePeriod=2 Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.137718 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290035 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.291225 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities" (OuterVolumeSpecName: "utilities") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.302424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn" (OuterVolumeSpecName: "kube-api-access-kkcfn") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "kube-api-access-kkcfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.349535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392609 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392671 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392685 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787830 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" exitCode=0 Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608"} Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787921 4766 scope.go:117] "RemoveContainer" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787925 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.809326 4766 scope.go:117] "RemoveContainer" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.815628 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.821128 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.838403 4766 scope.go:117] "RemoveContainer" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859450 4766 scope.go:117] "RemoveContainer" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.859909 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": container with ID starting with 6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f not found: ID does not exist" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859938 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} err="failed to get container status \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": rpc error: code = NotFound desc = could not find container \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": container with ID starting with 6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f not found: ID does not exist" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859959 4766 scope.go:117] "RemoveContainer" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.860312 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": container with ID starting with c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7 not found: ID does not exist" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860340 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7"} err="failed to get container status \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": rpc error: code = NotFound desc = could not find container \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": container with ID starting with c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7 not found: ID does not exist" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860354 4766 scope.go:117] "RemoveContainer" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.860581 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": container with ID starting with f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b not found: ID does not exist" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860602 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b"} err="failed to get container status \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": rpc error: code = NotFound desc = could not find container \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": container with ID starting with f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b not found: ID does not exist" Jan 30 16:37:54 crc kubenswrapper[4766]: I0130 16:37:54.048404 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" path="/var/lib/kubelet/pods/bf6c0939-3788-45ef-b4d3-0f198fb4039f/volumes" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.712942 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713559 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713577 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713598 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-utilities" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713608 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-utilities" Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713627 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-content" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-content" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713775 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.714371 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.716942 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.717153 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.721665 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-6r4tz" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.744063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.855680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.957117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.976993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.039147 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.468247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.824984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerStarted","Data":"bd66530389ff5553db017967ddf2037ad50e201e2f1dfc09574b461b85f741e1"} Jan 30 16:37:58 crc kubenswrapper[4766]: I0130 16:37:58.966884 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.373774 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.374616 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.380310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.502394 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.604533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.629238 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.698036 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.840009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerStarted","Data":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.840168 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tkmgw" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" containerID="cri-o://b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" gracePeriod=2 Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.865591 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tkmgw" podStartSLOduration=1.7316621639999998 podStartE2EDuration="3.865571867s" podCreationTimestamp="2026-01-30 16:37:56 +0000 UTC" firstStartedPulling="2026-01-30 16:37:57.476572204 +0000 UTC m=+932.114529550" lastFinishedPulling="2026-01-30 16:37:59.610481907 +0000 UTC m=+934.248439253" observedRunningTime="2026-01-30 16:37:59.863301554 +0000 UTC m=+934.501258900" watchObservedRunningTime="2026-01-30 16:37:59.865571867 +0000 UTC m=+934.503529233" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.901723 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: W0130 16:37:59.934995 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502b8426_9711_4e00_b59f_743352003f2b.slice/crio-60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6 
WatchSource:0}: Error finding container 60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6: Status 404 returned error can't find the container with id 60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6 Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.181679 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.313629 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.318866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp" (OuterVolumeSpecName: "kube-api-access-f7jnp") pod "cd84aed8-c9c3-4e8d-b212-13955a78d7b4" (UID: "cd84aed8-c9c3-4e8d-b212-13955a78d7b4"). InnerVolumeSpecName "kube-api-access-f7jnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.415643 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846376 4766 generic.go:334] "Generic (PLEG): container finished" podID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" exitCode=0 Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846440 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerDied","Data":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerDied","Data":"bd66530389ff5553db017967ddf2037ad50e201e2f1dfc09574b461b85f741e1"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846618 4766 scope.go:117] "RemoveContainer" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.847695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dpb9n" event={"ID":"502b8426-9711-4e00-b59f-743352003f2b","Type":"ContainerStarted","Data":"05f190037886438d95b7be40f1bbfe4211027858d8fb86e5cfdb5159cf018c79"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.847725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dpb9n" event={"ID":"502b8426-9711-4e00-b59f-743352003f2b","Type":"ContainerStarted","Data":"60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.866372 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dpb9n" podStartSLOduration=1.82262437 podStartE2EDuration="1.866352017s" podCreationTimestamp="2026-01-30 16:37:59 +0000 UTC" firstStartedPulling="2026-01-30 16:37:59.94176774 +0000 UTC m=+934.579725076" lastFinishedPulling="2026-01-30 16:37:59.985495367 +0000 UTC m=+934.623452723" observedRunningTime="2026-01-30 16:38:00.864554767 +0000 UTC m=+935.502512113" watchObservedRunningTime="2026-01-30 16:38:00.866352017 +0000 UTC m=+935.504309373" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.868785 4766 scope.go:117] "RemoveContainer" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: E0130 16:38:00.869431 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": container with ID starting with b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582 not found: ID does not exist" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.869467 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} err="failed to get container status \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": rpc error: code = NotFound desc = could not find container \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": container with ID starting with b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582 not found: ID does not exist" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.880874 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.886883 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:38:02 crc kubenswrapper[4766]: I0130 16:38:02.046991 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" path="/var/lib/kubelet/pods/cd84aed8-c9c3-4e8d-b212-13955a78d7b4/volumes" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.045132 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.045572 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.699373 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.699566 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.736171 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.926127 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.605851 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:16 crc kubenswrapper[4766]: E0130 16:38:16.606643 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.606655 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.606775 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.607609 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.610670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hqb7r" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.623881 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748364 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748493 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748548 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.850440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.850673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.876681 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.926062 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.337420 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:17 crc kubenswrapper[4766]: W0130 16:38:17.341712 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbc79777_d574_4d18_953a_6d51b5c2bd84.slice/crio-1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928 WatchSource:0}: Error finding container 1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928: Status 404 returned error can't find the container with id 1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928 Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949796 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="d1e1e63a775334305ecf471f09907034407ab73f21c38b8aaa80d0bed80fd160" exitCode=0 Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"d1e1e63a775334305ecf471f09907034407ab73f21c38b8aaa80d0bed80fd160"} Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerStarted","Data":"1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928"} Jan 30 16:38:19 crc kubenswrapper[4766]: I0130 16:38:19.973086 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="96028487a00bdf6d6b85da2927b154580a3a4d86e04cccc1442fb4e60a5adc96" exitCode=0 Jan 30 16:38:19 crc kubenswrapper[4766]: I0130 16:38:19.973162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"96028487a00bdf6d6b85da2927b154580a3a4d86e04cccc1442fb4e60a5adc96"} Jan 30 16:38:20 crc kubenswrapper[4766]: I0130 16:38:20.982667 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="a0667897374fd1d7d0b96fac3e5ab5303850348d561ffad0c5f2041c5320a561" exitCode=0 Jan 30 16:38:20 crc kubenswrapper[4766]: I0130 16:38:20.982723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"a0667897374fd1d7d0b96fac3e5ab5303850348d561ffad0c5f2041c5320a561"} Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.247278 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.325710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.326155 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.326357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.327383 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle" (OuterVolumeSpecName: "bundle") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.333169 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764" (OuterVolumeSpecName: "kube-api-access-z5764") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "kube-api-access-z5764". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.342711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util" (OuterVolumeSpecName: "util") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428005 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428047 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428056 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928"} Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999368 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.579154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"] Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580022 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="util" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580039 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="util" Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580059 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580067 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580082 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="pull" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580091 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="pull" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580234 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580659 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.582706 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vpl4v" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.600522 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"] Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.633482 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.734568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.762552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.900901 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:29 crc kubenswrapper[4766]: I0130 16:38:29.396564 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"] Jan 30 16:38:30 crc kubenswrapper[4766]: I0130 16:38:30.060239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" event={"ID":"e1df6663-4a1f-4900-8eba-215a6f08beb0","Type":"ContainerStarted","Data":"751f479300a9badf6846b64d74f180b7def8679b381884837e34197959023b59"} Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.103491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" event={"ID":"e1df6663-4a1f-4900-8eba-215a6f08beb0","Type":"ContainerStarted","Data":"1f4c9771221d4d4aa209204af8c2d10f36e887f858fdcba2df171f5191f3966c"} Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.104050 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.134937 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" podStartSLOduration=1.89751764 podStartE2EDuration="7.134916884s" podCreationTimestamp="2026-01-30 16:38:28 +0000 UTC" firstStartedPulling="2026-01-30 16:38:29.398818086 +0000 UTC m=+964.036775432" lastFinishedPulling="2026-01-30 16:38:34.63621733 +0000 UTC m=+969.274174676" observedRunningTime="2026-01-30 16:38:35.129547205 +0000 UTC m=+969.767504551" watchObservedRunningTime="2026-01-30 16:38:35.134916884 +0000 UTC m=+969.772874230" Jan 30 16:38:39 crc kubenswrapper[4766]: I0130 16:38:39.046057 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:38:39 crc kubenswrapper[4766]: I0130 16:38:39.046687 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:38:39 crc kubenswrapper[4766]: I0130 16:38:39.046753 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:38:39 crc kubenswrapper[4766]: I0130 16:38:39.047493 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:38:39 crc kubenswrapper[4766]: I0130 16:38:39.047556 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" 
containerID="cri-o://5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" gracePeriod=600 Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140599 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" exitCode=0 Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140647 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf"} Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140980 4766 scope.go:117] "RemoveContainer" containerID="2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" Jan 30 16:38:48 crc kubenswrapper[4766]: I0130 16:38:48.903881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.238032 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.239571 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.242072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-z9hxc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.247802 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.248955 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.251009 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4qwsx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.257103 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.258372 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.261820 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-x572m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.273577 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.308081 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.323261 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.327940 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.328250 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.328393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: \"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.349745 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.351132 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.357665 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-m546p" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.361247 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.362370 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.364675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-88q26" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.369228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.387571 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.408030 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.409012 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.411985 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dfdjc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432908 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432943 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: 
\"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.440279 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.441069 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.444880 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7lj62" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.445091 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.463257 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.471691 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.486722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.492880 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: \"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.495996 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.497312 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.502865 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-s58qv" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.515079 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.539426 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544254 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544401 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwzb\" (UniqueName: \"kubernetes.io/projected/be908bdc-d0b5-4409-b088-b9b51de3cfb0-kube-api-access-nnwzb\") pod 
\"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.559634 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.561433 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.562543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.569341 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.570249 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.575558 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fwd4v" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.576226 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-765t7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.577613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.600757 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.601692 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.637299 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.637874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.655660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.659878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.660062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwzb\" (UniqueName: \"kubernetes.io/projected/be908bdc-d0b5-4409-b088-b9b51de3cfb0-kube-api-access-nnwzb\") pod \"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.660145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: E0130 16:39:07.660228 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:07 crc kubenswrapper[4766]: E0130 16:39:07.660313 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:08.160292789 +0000 UTC m=+1002.798250135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.667557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.667680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.699473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.703334 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.720087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwzb\" (UniqueName: \"kubernetes.io/projected/be908bdc-d0b5-4409-b088-b9b51de3cfb0-kube-api-access-nnwzb\") pod \"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.731229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.732589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.732929 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.756504 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.757483 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.761003 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8x6s2" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.796888 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.799973 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.800056 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.808359 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.809597 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.812616 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dhd5w" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.819271 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.826101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.841114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.854950 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.856290 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"] Jan 30 
16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.857435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.860579 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.866987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-p9fn5" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.876268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.883779 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.884979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.892455 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.895198 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-n8kpd" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.901571 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgw69\" (UniqueName: \"kubernetes.io/projected/d4c39f8d-f83d-4311-bb99-24dfa7eaeafd-kube-api-access-pgw69\") pod \"neutron-operator-controller-manager-576995988b-kkvlj\" (UID: \"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.901648 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.903816 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.909878 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.911956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l2sxb" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.919836 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.931838 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.932998 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.935832 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.936166 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4c97f" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.953548 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.955505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.960631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.976121 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.982687 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.988150 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.004452 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z5dp6" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.004738 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.005889 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgw69\" (UniqueName: \"kubernetes.io/projected/d4c39f8d-f83d-4311-bb99-24dfa7eaeafd-kube-api-access-pgw69\") pod \"neutron-operator-controller-manager-576995988b-kkvlj\" (UID: \"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.012859 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6qgsq" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.067038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.091126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgw69\" (UniqueName: \"kubernetes.io/projected/d4c39f8d-f83d-4311-bb99-24dfa7eaeafd-kube-api-access-pgw69\") pod \"neutron-operator-controller-manager-576995988b-kkvlj\" (UID: \"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd\") " 
pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.092435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.098245 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106650 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107918 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108062 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.112473 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mb7mw" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.118993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.145146 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.145285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.146339 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.152051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.158689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sfl6p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.169942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.170104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.181213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.182265 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.184589 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mxqzz" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.197849 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210098 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210130 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" (UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210162 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210229 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210643 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210684 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:08.710671097 +0000 UTC m=+1003.348628443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210826 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210851 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.210843832 +0000 UTC m=+1003.848801178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.220842 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.234069 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.275436 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.292784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.294857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321167 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" 
(UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.327898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.344235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.359666 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" (UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.370375 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.424522 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.424656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.447302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.448151 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.459263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9tl7m" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.459639 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.461214 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.473282 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.485822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.501110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.516475 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.517478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.525587 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-phwmg" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.529894 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.535901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.551502 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.628965 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629386 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxcs\" (UniqueName: \"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733739 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxcs\" (UniqueName: 
\"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.733977 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734032 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.234016931 +0000 UTC m=+1003.871974277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734590 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.734618917 +0000 UTC m=+1004.372576263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734663 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734682 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.234675989 +0000 UTC m=+1003.872633325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.782857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.794752 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.846925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxcs\" (UniqueName: \"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.970631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.138413 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.248694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.249345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249370 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249472 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:10.249449116 +0000 UTC m=+1004.887406502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.249554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249645 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249759 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249766 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:11.249718924 +0000 UTC m=+1005.887676280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249799 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:10.249787225 +0000 UTC m=+1004.887744621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.354194 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" event={"ID":"72b84e1c-8ed8-4fae-8dff-ca2576579904","Type":"ContainerStarted","Data":"60589c57f1b9fc748ea034d80c5d0190674d723fc5ce9c74e34d6da7c3f4f1f4"} Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.626210 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.629603 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd34f90ce_9c03_441f_85cb_67b1666672fc.slice/crio-78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281 WatchSource:0}: Error finding container 78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281: Status 404 returned error can't find the container with id 78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281 Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.676105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.686012 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.734505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.767888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.768006 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.768044 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:11.768031238 +0000 UTC m=+1006.405988584 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.878731 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.880779 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fd0d31_da4c_4c6b_bbc4_8302daee3ee5.slice/crio-14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189 WatchSource:0}: Error finding container 14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189: Status 404 returned error can't find the container with id 14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189 Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.910269 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.943049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.944561 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0db2f42_5872_4cac_9ee0_5990c49e0a26.slice/crio-cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a WatchSource:0}: Error finding container cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a: Status 404 returned error can't find the container with id cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.947693 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5fe995_2904_4751_ae74_958efaa8596a.slice/crio-815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d WatchSource:0}: Error finding container 815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d: Status 404 returned error can't find the container with id 815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.246944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.255847 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc1c52ba_db5b_40ac_87da_de36346e8491.slice/crio-61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82 WatchSource:0}: Error finding container 61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82: Status 404 returned error can't find the container with id 61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82 Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.273408 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.276429 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.276521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.276739 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.276805 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:12.276785959 +0000 UTC m=+1006.914743305 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.277234 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.277278 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:12.277265262 +0000 UTC m=+1006.915222618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.281369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.282125 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ea9d2ea_ca11_428c_ab61_28bf391bcd4f.slice/crio-ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516 WatchSource:0}: Error finding container ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516: Status 404 returned error can't find the container with id ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516 Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.289345 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.295225 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0582a100_4b50_452f_baca_e67b4d6f2891.slice/crio-8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d WatchSource:0}: Error finding container 8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d: Status 404 returned error can't find the container with id 8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.300365 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.312753 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.313902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.327911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"] Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335217 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x9t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-566d8d7445-l44w4_openstack-operators(5eacef6b-7362-4c43-912a-eb3e6ccce6e9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335459 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krdz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-d7xxm_openstack-operators(c03d46f4-f454-4b31-b4c7-5c324390d8ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335569 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgw69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-kkvlj_openstack-operators(d4c39f8d-f83d-4311-bb99-24dfa7eaeafd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 
16:39:10.335625 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj79c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-2jmqd_openstack-operators(8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336363 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336631 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 
16:39:10.336767 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.338956 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbfnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-swq4p_openstack-operators(a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.340162 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.339372 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.346116 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"] Jan 30 
16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.351846 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.358067 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"] Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.363324 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph9c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-bm24k_openstack-operators(04cf0394-fb7b-41a9-a9bb-6fec8537d393): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.365457 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.371911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" 
event={"ID":"5eacef6b-7362-4c43-912a-eb3e6ccce6e9","Type":"ContainerStarted","Data":"899985ba76be1bd97e3368a75f10c705e148b92f9398cd0b6c3068ca08fc87f5"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.374111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" event={"ID":"b0db2f42-5872-4cac-9ee0-5990c49e0a26","Type":"ContainerStarted","Data":"cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.375984 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.376799 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" event={"ID":"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5","Type":"ContainerStarted","Data":"14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.388254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" event={"ID":"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd","Type":"ContainerStarted","Data":"12228b9af4d4bc308023e3a775d30e74e57110d29bbdb312f4dfd3ff0fdf0937"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.389835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.389874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" event={"ID":"46a7c725-b480-4f85-91d0-24831e713b26","Type":"ContainerStarted","Data":"a9655c98b93f1c61d3ee397fadce9fec766d22834caae8357abed7842d073c57"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.394753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" event={"ID":"c610cc53-6813-4c5b-86e9-b421aaa21666","Type":"ContainerStarted","Data":"2d081a00177fa8db702929028c1e6c6cc9bf4739ea0af40d37d12f283db1f362"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.398227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" event={"ID":"0582a100-4b50-452f-baca-e67b4d6f2891","Type":"ContainerStarted","Data":"8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.399409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" event={"ID":"0974b654-1fc0-4d97-9be3-eca153de4c57","Type":"ContainerStarted","Data":"fa2a385a8a979eb1c6d6d2cac44e589554ccfefa16fea363575c39fa4ff71408"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 
16:39:10.400081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" event={"ID":"be908bdc-d0b5-4409-b088-b9b51de3cfb0","Type":"ContainerStarted","Data":"ae8ac28bd87773b8c1ed6ee0840f4603e2667361073061ffb4cd37d61bd128a6"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.400871 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" event={"ID":"04cf0394-fb7b-41a9-a9bb-6fec8537d393","Type":"ContainerStarted","Data":"2e89879bd07df8599e3d309460b4b5fbe981645728ecad1e2e363383ab955328"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.403431 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.429719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" event={"ID":"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90","Type":"ContainerStarted","Data":"e536f2a5c1eeb77d5973f93e8028eaed0a69d0a6f92bc0b7a0d7de95799e8aa2"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.438147 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.451405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" event={"ID":"c03d46f4-f454-4b31-b4c7-5c324390d8ec","Type":"ContainerStarted","Data":"aab8c41ae82cca782bf11aef1eedcb1a459f6f217a5d4750d6ba2674dee810fb"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.452907 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.458994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" event={"ID":"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f","Type":"ContainerStarted","Data":"ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.479240 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" event={"ID":"d34f90ce-9c03-441f-85cb-67b1666672fc","Type":"ContainerStarted","Data":"78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.480879 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" event={"ID":"2a5fe995-2904-4751-ae74-958efaa8596a","Type":"ContainerStarted","Data":"815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.484404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" event={"ID":"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49","Type":"ContainerStarted","Data":"bc7d35dda2a93701d1d3d95881e3790fdb4b9319a75d9df49175b79e4e1e2b7c"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.489550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" event={"ID":"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac","Type":"ContainerStarted","Data":"f15be8d71672a3a992472c4cb823c9521797bff25d38501e31b1d8887a39cfb0"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.492534 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.493245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" event={"ID":"dc1c52ba-db5b-40ac-87da-de36346e8491","Type":"ContainerStarted","Data":"61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.495368 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" event={"ID":"55fb4fd9-f80b-474b-b9c9-758720536349","Type":"ContainerStarted","Data":"72b9ee2bece212e3d66ee55f14cfaef4a0a15ee460eb0044c173d03fc5537ad3"} Jan 30 16:39:11 crc kubenswrapper[4766]: I0130 16:39:11.291925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.292120 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.292169 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:15.292152184 +0000 UTC m=+1009.930109530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.545243 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.579931 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.579932 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580018 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580076 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:11 crc kubenswrapper[4766]: I0130 16:39:11.806902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:11 crc 
kubenswrapper[4766]: E0130 16:39:11.807204 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.807265 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:15.80724832 +0000 UTC m=+1010.445205666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: I0130 16:39:12.321095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:12 crc kubenswrapper[4766]: I0130 16:39:12.321202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321284 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321297 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321351 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:16.321332358 +0000 UTC m=+1010.959289704 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321370 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:16.321363149 +0000 UTC m=+1010.959320495 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: I0130 16:39:15.300688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.300985 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.301154 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:23.301135054 +0000 UTC m=+1017.939092400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: I0130 16:39:15.818057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.818287 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.818350 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:23.818332137 +0000 UTC m=+1018.456289483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: I0130 16:39:16.334020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:16 crc kubenswrapper[4766]: I0130 16:39:16.334223 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334380 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:24.334422062 +0000 UTC m=+1018.972379408 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334876 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334910 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:24.334899765 +0000 UTC m=+1018.972857111 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.041915 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.358938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.359164 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.359231 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:39.359215009 +0000 UTC m=+1033.997172355 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.867698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.868261 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.868438 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:39.868411743 +0000 UTC m=+1034.506369169 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: I0130 16:39:24.374289 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:24 crc kubenswrapper[4766]: I0130 16:39:24.374362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374482 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374535 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:40.374518562 +0000 UTC m=+1035.012475908 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374482 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374707 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:40.374675176 +0000 UTC m=+1035.012632522 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.103812 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.104069 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-867g6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-64469b487f-xkfn6_openstack-operators(b0db2f42-5872-4cac-9ee0-5990c49e0a26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.105270 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" 
podUID="b0db2f42-5872-4cac-9ee0-5990c49e0a26" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.685497 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" podUID="b0db2f42-5872-4cac-9ee0-5990c49e0a26" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.793813 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.794301 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zs6qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7d96d95959-l4pbc_openstack-operators(0974b654-1fc0-4d97-9be3-eca153de4c57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.795692 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podUID="0974b654-1fc0-4d97-9be3-eca153de4c57" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.691649 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podUID="0974b654-1fc0-4d97-9be3-eca153de4c57" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.968756 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.968997 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsz4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-65dc6c8d9c-8hrwp_openstack-operators(2a5fe995-2904-4751-ae74-958efaa8596a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.970207 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podUID="2a5fe995-2904-4751-ae74-958efaa8596a" Jan 30 16:39:27 crc kubenswrapper[4766]: E0130 16:39:27.736524 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28\\\"\"" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podUID="2a5fe995-2904-4751-ae74-958efaa8596a" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.967134 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.968408 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2d8vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-5644b66645-6jc7f_openstack-operators(0582a100-4b50-452f-baca-e67b4d6f2891): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.969668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podUID="0582a100-4b50-452f-baca-e67b4d6f2891" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.181072 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.181290 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99vnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-dklb4_openstack-operators(55fb4fd9-f80b-474b-b9c9-758720536349): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.182649 4766 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podUID="55fb4fd9-f80b-474b-b9c9-758720536349" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.606107 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.606335 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phsmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-69484b8d9d-tqxks_openstack-operators(0c603c94-f0b0-4820-a5a1-0ab9a76ceb49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.608682 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" 
podUID="0c603c94-f0b0-4820-a5a1-0ab9a76ceb49" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768274 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podUID="55fb4fd9-f80b-474b-b9c9-758720536349" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768314 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podUID="0582a100-4b50-452f-baca-e67b4d6f2891" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768526 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" podUID="0c603c94-f0b0-4820-a5a1-0ab9a76ceb49" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.190071 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.190311 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsxcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-49xwp_openstack-operators(dc1c52ba-db5b-40ac-87da-de36346e8491): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.191567 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podUID="dc1c52ba-db5b-40ac-87da-de36346e8491" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.774804 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podUID="dc1c52ba-db5b-40ac-87da-de36346e8491" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.168167 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.168388 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbfnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-swq4p_openstack-operators(a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.169552 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.421992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.428032 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.570075 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7lj62" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.578549 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.667214 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.667406 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgw69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-kkvlj_openstack-operators(d4c39f8d-f83d-4311-bb99-24dfa7eaeafd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.668791 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.934826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.940503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.129623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4c97f" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.137272 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.233744 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.233902 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krdz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-d7xxm_openstack-operators(c03d46f4-f454-4b31-b4c7-5c324390d8ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.235116 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.443741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.444891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.448074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.455292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.460909 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9tl7m" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.469248 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.302895 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.303048 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj79c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-2jmqd_openstack-operators(8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.304344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.340802 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"] Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.581968 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"] Jan 30 16:39:42 crc kubenswrapper[4766]: W0130 16:39:42.591779 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09fcb126_016c_4b79_91d5_90e98e3da7f3.slice/crio-2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa WatchSource:0}: Error finding container 2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa: Status 404 returned error can't find the container with id 2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa Jan 30 16:39:42 crc kubenswrapper[4766]: W0130 16:39:42.673004 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a6f6d6_6e33_4f4c_a0e4_cff7d180eb6f.slice/crio-bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995 WatchSource:0}: Error finding container bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995: Status 404 returned error can't find the container with id bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995 Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.676873 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"] Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.801772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" event={"ID":"d34f90ce-9c03-441f-85cb-67b1666672fc","Type":"ContainerStarted","Data":"e09e22e7983b7ade69bc147569acb5bc9f1f5d00c149f873a912116fcd2a1764"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.801880 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.804053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" event={"ID":"2a5fe995-2904-4751-ae74-958efaa8596a","Type":"ContainerStarted","Data":"8d570920d17fc1ae12b9b54e55967afb6c83352925ae29443e891db9ad479d3b"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.804285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.809661 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" event={"ID":"0974b654-1fc0-4d97-9be3-eca153de4c57","Type":"ContainerStarted","Data":"f700b6e4fa71d69e6da1639c284016417854dccc06a649be11abc844ee20d6d0"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.809884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.811827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" 
event={"ID":"be908bdc-d0b5-4409-b088-b9b51de3cfb0","Type":"ContainerStarted","Data":"88b672a2abad1a3cd100abd06985751753dedf6f1e8d215f28e92c387886bbc6"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.811965 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.813438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" event={"ID":"09fcb126-016c-4b79-91d5-90e98e3da7f3","Type":"ContainerStarted","Data":"2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.816305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" event={"ID":"b0db2f42-5872-4cac-9ee0-5990c49e0a26","Type":"ContainerStarted","Data":"558205bfda960ba437e4ea5b8dbed5e5538b95235f953071b86f9166d2d19f42"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.816564 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.818663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" event={"ID":"04cf0394-fb7b-41a9-a9bb-6fec8537d393","Type":"ContainerStarted","Data":"65e18457d3c4da2d57d465a1bcc526961ce800ed5cd460dd9b1705c353812612"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.819324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.825990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" event={"ID":"90a2893c-9d38-4d53-93d9-a50421172933","Type":"ContainerStarted","Data":"450e087d04a70c9f4aeb61b4b0b4d183ea1b4016a502df440a52344ca81b1820"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.839598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" event={"ID":"72b84e1c-8ed8-4fae-8dff-ca2576579904","Type":"ContainerStarted","Data":"5a4bf8b1f9323c54345c7c674d450b58f467e170c1cca91fe80e41bb3406bb6b"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.839887 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.842791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" event={"ID":"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f","Type":"ContainerStarted","Data":"bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.848299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" event={"ID":"46a7c725-b480-4f85-91d0-24831e713b26","Type":"ContainerStarted","Data":"a92a386f79e80bf4e17b0f08c9f8b25ac6bdd2650b6693d190dfdbfd0c8af1f3"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.848641 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.849922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" event={"ID":"5eacef6b-7362-4c43-912a-eb3e6ccce6e9","Type":"ContainerStarted","Data":"edd4304b41768580abd756eb069e1a9c8ea3a70a213fa5b0ec1e8062c8b94772"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.850141 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.851559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" event={"ID":"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5","Type":"ContainerStarted","Data":"b5792aa3ea345aaa15ab6ddce02fdd1c22763f18607958ecf444c1356b879ebd"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.851684 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.858341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" event={"ID":"c610cc53-6813-4c5b-86e9-b421aaa21666","Type":"ContainerStarted","Data":"fae208f85bbc6e2b44880a03acbc9965f6ac7869b9ba96a6a68a262f79ef1375"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.858492 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.859778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" event={"ID":"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f","Type":"ContainerStarted","Data":"3afe49a430f5396920c65317dca74e2283cffca82f0e43e73ee572be0cb9ea13"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.859851 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.867450 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" podStartSLOduration=5.278968101 podStartE2EDuration="35.86743241s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.634691396 +0000 UTC m=+1004.272648742" lastFinishedPulling="2026-01-30 16:39:40.223155705 +0000 UTC m=+1034.861113051" observedRunningTime="2026-01-30 16:39:42.839241934 +0000 UTC m=+1037.477199280" watchObservedRunningTime="2026-01-30 16:39:42.86743241 +0000 UTC m=+1037.505389756" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.894109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podStartSLOduration=4.110497572 podStartE2EDuration="35.894090745s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.286649291 +0000 UTC m=+1004.924606637" lastFinishedPulling="2026-01-30 16:39:42.070242464 +0000 UTC m=+1036.708199810" observedRunningTime="2026-01-30 
16:39:42.865821365 +0000 UTC m=+1037.503778721" watchObservedRunningTime="2026-01-30 16:39:42.894090745 +0000 UTC m=+1037.532048091" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.897933 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podStartSLOduration=3.772302867 podStartE2EDuration="35.8979182s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.955299636 +0000 UTC m=+1004.593256982" lastFinishedPulling="2026-01-30 16:39:42.080914969 +0000 UTC m=+1036.718872315" observedRunningTime="2026-01-30 16:39:42.896035878 +0000 UTC m=+1037.533993224" watchObservedRunningTime="2026-01-30 16:39:42.8979182 +0000 UTC m=+1037.535875546" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.938578 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" podStartSLOduration=3.822224051 podStartE2EDuration="35.938554948s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.953731102 +0000 UTC m=+1004.591688448" lastFinishedPulling="2026-01-30 16:39:42.070061999 +0000 UTC m=+1036.708019345" observedRunningTime="2026-01-30 16:39:42.93204714 +0000 UTC m=+1037.570004486" watchObservedRunningTime="2026-01-30 16:39:42.938554948 +0000 UTC m=+1037.576512294" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.029651 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podStartSLOduration=4.278456046 podStartE2EDuration="36.029633927s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.363167468 +0000 UTC m=+1005.001124814" lastFinishedPulling="2026-01-30 16:39:42.114345339 +0000 UTC m=+1036.752302695" observedRunningTime="2026-01-30 16:39:42.977982445 +0000 UTC m=+1037.615939801" watchObservedRunningTime="2026-01-30 16:39:43.029633927 +0000 UTC m=+1037.667591273" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.417712 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" podStartSLOduration=5.950941508 podStartE2EDuration="36.417685834s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.756310606 +0000 UTC m=+1004.394267952" lastFinishedPulling="2026-01-30 16:39:40.223054932 +0000 UTC m=+1034.861012278" observedRunningTime="2026-01-30 16:39:43.403449472 +0000 UTC m=+1038.041406818" watchObservedRunningTime="2026-01-30 16:39:43.417685834 +0000 UTC m=+1038.055643180" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.523927 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" podStartSLOduration=7.991083155 podStartE2EDuration="36.52390662s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.294620951 +0000 UTC m=+1004.932578297" lastFinishedPulling="2026-01-30 16:39:38.827444426 +0000 UTC m=+1033.465401762" observedRunningTime="2026-01-30 16:39:43.515387345 +0000 UTC m=+1038.153344691" watchObservedRunningTime="2026-01-30 16:39:43.52390662 +0000 UTC m=+1038.161863966" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.673649 4766 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" podStartSLOduration=4.559855796 podStartE2EDuration="36.673631994s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.173339339 +0000 UTC m=+1003.811296685" lastFinishedPulling="2026-01-30 16:39:41.287115537 +0000 UTC m=+1035.925072883" observedRunningTime="2026-01-30 16:39:43.624415928 +0000 UTC m=+1038.262373284" watchObservedRunningTime="2026-01-30 16:39:43.673631994 +0000 UTC m=+1038.311589340" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.676418 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" podStartSLOduration=6.906633208 podStartE2EDuration="36.6764049s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.883250791 +0000 UTC m=+1004.521208137" lastFinishedPulling="2026-01-30 16:39:39.653022483 +0000 UTC m=+1034.290979829" observedRunningTime="2026-01-30 16:39:43.663638498 +0000 UTC m=+1038.301595844" watchObservedRunningTime="2026-01-30 16:39:43.6764049 +0000 UTC m=+1038.314362246" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.705108 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" podStartSLOduration=7.58458048 podStartE2EDuration="36.70509155s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.706912705 +0000 UTC m=+1004.344870071" lastFinishedPulling="2026-01-30 16:39:38.827423795 +0000 UTC m=+1033.465381141" observedRunningTime="2026-01-30 16:39:43.701963844 +0000 UTC m=+1038.339921190" watchObservedRunningTime="2026-01-30 16:39:43.70509155 +0000 UTC m=+1038.343048896" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.750861 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" podStartSLOduration=6.80542964 podStartE2EDuration="36.750841869s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.707570913 +0000 UTC m=+1004.345528269" lastFinishedPulling="2026-01-30 16:39:39.652983152 +0000 UTC m=+1034.290940498" observedRunningTime="2026-01-30 16:39:43.74612337 +0000 UTC m=+1038.384080716" watchObservedRunningTime="2026-01-30 16:39:43.750841869 +0000 UTC m=+1038.388799215" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.794886 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podStartSLOduration=5.075912468 podStartE2EDuration="36.794865022s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335054964 +0000 UTC m=+1004.973012310" lastFinishedPulling="2026-01-30 16:39:42.054007518 +0000 UTC m=+1036.691964864" observedRunningTime="2026-01-30 16:39:43.784480856 +0000 UTC m=+1038.422438202" watchObservedRunningTime="2026-01-30 16:39:43.794865022 +0000 UTC m=+1038.432822368" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.884160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" event={"ID":"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f","Type":"ContainerStarted","Data":"307cbff2f69062a50f7d2778ca3b52e7ec43b33e168a7b97c89925fb02a677a9"} Jan 30 16:39:43 crc 
kubenswrapper[4766]: I0130 16:39:43.945731 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" podStartSLOduration=35.945709637 podStartE2EDuration="35.945709637s" podCreationTimestamp="2026-01-30 16:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:39:43.93822915 +0000 UTC m=+1038.576186496" watchObservedRunningTime="2026-01-30 16:39:43.945709637 +0000 UTC m=+1038.583666983" Jan 30 16:39:44 crc kubenswrapper[4766]: I0130 16:39:44.908625 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.565723 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.582043 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.611407 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.706427 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.709513 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.735083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.867920 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.945438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" event={"ID":"90a2893c-9d38-4d53-93d9-a50421172933","Type":"ContainerStarted","Data":"1fcfb977a25dff299c6eb2e51ec9cd97ae99dae6c59d3a0b8bfaf953de13761d"} Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.945503 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.946938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" event={"ID":"09fcb126-016c-4b79-91d5-90e98e3da7f3","Type":"ContainerStarted","Data":"c55582d89c1c0ace67353aaa342e8b71119e0df6b43182ad4e5f341814f87e18"} Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.947189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.965668 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.979972 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" podStartSLOduration=36.024514376 podStartE2EDuration="40.979952003s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:42.365844966 +0000 UTC m=+1037.003802312" lastFinishedPulling="2026-01-30 16:39:47.321282593 +0000 UTC m=+1041.959239939" observedRunningTime="2026-01-30 16:39:47.97077876 +0000 UTC m=+1042.608736106" watchObservedRunningTime="2026-01-30 16:39:47.979952003 +0000 UTC m=+1042.617909349" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.989014 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.038125 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" podStartSLOduration=36.308273741 podStartE2EDuration="41.038103444s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:42.594552875 +0000 UTC m=+1037.232510221" lastFinishedPulling="2026-01-30 16:39:47.324382578 +0000 UTC m=+1041.962339924" observedRunningTime="2026-01-30 16:39:48.029364474 +0000 UTC m=+1042.667321820" watchObservedRunningTime="2026-01-30 16:39:48.038103444 +0000 UTC m=+1042.676060790" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.095981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.346983 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.373332 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.965820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" event={"ID":"0582a100-4b50-452f-baca-e67b4d6f2891","Type":"ContainerStarted","Data":"75dfd5a4a476f5dc63034938322a4851a7df22523ca53c97a008085c9a1540ac"} Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.967614 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.987168 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podStartSLOduration=3.72412896 podStartE2EDuration="42.987150733s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.30476144 +0000 UTC m=+1004.942718786" lastFinishedPulling="2026-01-30 16:39:49.567783213 +0000 UTC m=+1044.205740559" observedRunningTime="2026-01-30 16:39:49.984528271 +0000 UTC m=+1044.622485617" watchObservedRunningTime="2026-01-30 16:39:49.987150733 +0000 UTC m=+1044.625108079" Jan 30 16:39:50 crc 
kubenswrapper[4766]: I0130 16:39:50.476378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:50 crc kubenswrapper[4766]: I0130 16:39:50.973968 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" event={"ID":"55fb4fd9-f80b-474b-b9c9-758720536349","Type":"ContainerStarted","Data":"6772c34b63fb25310f7fb07c2b13db0b2c7e0b518065a85bba060f0f1f999c42"} Jan 30 16:39:50 crc kubenswrapper[4766]: I0130 16:39:50.974598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:51 crc kubenswrapper[4766]: E0130 16:39:51.040590 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.055931 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podStartSLOduration=4.333985016 podStartE2EDuration="44.055913177s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.312589045 +0000 UTC m=+1004.950546381" lastFinishedPulling="2026-01-30 16:39:50.034517196 +0000 UTC m=+1044.672474542" observedRunningTime="2026-01-30 16:39:50.991079512 +0000 UTC m=+1045.629036858" watchObservedRunningTime="2026-01-30 16:39:51.055913177 +0000 UTC m=+1045.693870523" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.982754 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" event={"ID":"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49","Type":"ContainerStarted","Data":"ba91c91e87fa10084404ed3945fd007525003e77d9737cc0989457e8aa91b7a4"} Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.983224 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.999698 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" podStartSLOduration=3.6415919260000003 podStartE2EDuration="44.999678359s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335009793 +0000 UTC m=+1004.972967139" lastFinishedPulling="2026-01-30 16:39:51.693096226 +0000 UTC m=+1046.331053572" observedRunningTime="2026-01-30 16:39:51.995650559 +0000 UTC m=+1046.633607915" watchObservedRunningTime="2026-01-30 16:39:51.999678359 +0000 UTC m=+1046.637635705" Jan 30 16:39:52 crc kubenswrapper[4766]: E0130 16:39:52.040026 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" 
podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:52 crc kubenswrapper[4766]: E0130 16:39:52.040427 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:53 crc kubenswrapper[4766]: E0130 16:39:53.040804 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:55 crc kubenswrapper[4766]: I0130 16:39:55.010842 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" event={"ID":"dc1c52ba-db5b-40ac-87da-de36346e8491","Type":"ContainerStarted","Data":"20bd4af3c341e4f3016a83e89e09799067fe9f419c9e5d74103a386bf16e6711"} Jan 30 16:39:55 crc kubenswrapper[4766]: I0130 16:39:55.030723 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podStartSLOduration=3.712452768 podStartE2EDuration="47.030690525s" podCreationTimestamp="2026-01-30 16:39:08 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.260932613 +0000 UTC m=+1004.898889949" lastFinishedPulling="2026-01-30 16:39:53.57917036 +0000 UTC m=+1048.217127706" observedRunningTime="2026-01-30 16:39:55.0290555 +0000 UTC m=+1049.667012846" watchObservedRunningTime="2026-01-30 16:39:55.030690525 +0000 UTC m=+1049.668647881" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.206076 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.489301 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.799637 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:59 crc kubenswrapper[4766]: I0130 16:39:59.584831 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:40:00 crc kubenswrapper[4766]: I0130 16:40:00.147901 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.116009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" event={"ID":"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90","Type":"ContainerStarted","Data":"646c585ca92c0ecda837027335aa38cbb31bdecdb042a33c6d49a09bc43d110e"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.117465 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.118803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" event={"ID":"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac","Type":"ContainerStarted","Data":"33503755cbeb7db55916eb0f2e9c15282992a3ac49d9479113aeeb520f1c1c3b"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.119220 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.120963 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" event={"ID":"c03d46f4-f454-4b31-b4c7-5c324390d8ec","Type":"ContainerStarted","Data":"f2c6d80928e55486925c0cba8e3b1fbdc73e64062b3f864cebed2d12441d42ac"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.121438 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.122535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" event={"ID":"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd","Type":"ContainerStarted","Data":"9f51c8544da8e8cc080e3c1176f30bd20516683d5fddb56b50e86abde53669db"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.122694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.141437 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podStartSLOduration=3.769448308 podStartE2EDuration="1m1.141386924s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335545628 +0000 UTC m=+1004.973502974" lastFinishedPulling="2026-01-30 16:40:07.707484244 +0000 UTC m=+1062.345441590" observedRunningTime="2026-01-30 16:40:08.141014154 +0000 UTC m=+1062.778971520" watchObservedRunningTime="2026-01-30 16:40:08.141386924 +0000 UTC m=+1062.779344270" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.165774 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podStartSLOduration=4.024632645 podStartE2EDuration="1m1.165754445s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.338786687 +0000 UTC m=+1004.976744033" lastFinishedPulling="2026-01-30 16:40:07.479908487 +0000 UTC m=+1062.117865833" observedRunningTime="2026-01-30 16:40:08.159869313 +0000 UTC m=+1062.797826669" watchObservedRunningTime="2026-01-30 16:40:08.165754445 +0000 UTC m=+1062.803711791" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.183098 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podStartSLOduration=3.709216708 podStartE2EDuration="1m1.183078722s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335430854 +0000 UTC m=+1004.973388200" lastFinishedPulling="2026-01-30 16:40:07.809292868 +0000 UTC 
m=+1062.447250214" observedRunningTime="2026-01-30 16:40:08.17790882 +0000 UTC m=+1062.815866176" watchObservedRunningTime="2026-01-30 16:40:08.183078722 +0000 UTC m=+1062.821036068" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.195517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podStartSLOduration=3.821496211 podStartE2EDuration="1m1.195495584s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335011793 +0000 UTC m=+1004.972969139" lastFinishedPulling="2026-01-30 16:40:07.709011166 +0000 UTC m=+1062.346968512" observedRunningTime="2026-01-30 16:40:08.192311256 +0000 UTC m=+1062.830268622" watchObservedRunningTime="2026-01-30 16:40:08.195495584 +0000 UTC m=+1062.833452930" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.150624 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.223722 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.278169 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.554974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.609397 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.611210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.614445 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6ld2n" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615462 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615519 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.650620 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.692914 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.699101 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.702038 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.710217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.710323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.725262 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.813325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 
16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.849605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.914458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.914602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.928071 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.939102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.020728 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.561653 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.656556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:36 crc kubenswrapper[4766]: W0130 16:40:36.659669 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ba5675_86b8_409a_b2f5_c0dbd6b95f2b.slice/crio-7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68 WatchSource:0}: Error finding container 7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68: Status 404 returned error can't find the container with id 7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68 Jan 30 16:40:37 crc kubenswrapper[4766]: I0130 16:40:37.324687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" event={"ID":"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b","Type":"ContainerStarted","Data":"7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68"} Jan 30 16:40:37 crc kubenswrapper[4766]: I0130 16:40:37.326446 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" event={"ID":"900942aa-a667-42dc-9ddf-a1909585c2e3","Type":"ContainerStarted","Data":"4dd23f899f0d12a8b608725fc3a9970423f5d27f8151e6c03d79ba260849d2dc"} Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.119032 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.144473 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.146290 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.153315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.365900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.368036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.412665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsqpd\" (UniqueName: 
\"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.499984 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.513115 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.558872 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.560060 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.598268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674202 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.775883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.775984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.776057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.777104 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.777104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.816169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.969396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.045779 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.045851 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.380842 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.391136 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.394483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.396443 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.402738 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.402789 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.403042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.403348 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.404337 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sx6cl" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.405532 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503082 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503157 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503244 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503305 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503325 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.548639 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608466 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608560 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608711 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608736 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.609269 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.610612 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.611060 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.612035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.612373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.613527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.618895 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.620853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.623358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.637799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.638429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.734556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.737845 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.756165 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.761203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.765677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774731 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774845 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774945 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.775300 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.788906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.794968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.802282 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vc5hz" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819152 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819208 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819248 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819544 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819621 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc 
kubenswrapper[4766]: I0130 16:40:39.932600 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932658 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.933022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.933080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.936964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.938517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.939253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.942454 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.946147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.948305 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.958043 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.961274 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.961969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.977474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.978146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.017517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.148125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.410604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" event={"ID":"a31c7217-d6d2-4cc1-ab83-016373333c80","Type":"ContainerStarted","Data":"c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5"} Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.411638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" event={"ID":"7c2933e1-c67d-45a6-8e08-fac512f6473b","Type":"ContainerStarted","Data":"a4daa3864bea92099d39184deffbe2394e36c62e86137adfe5c0a64228217582"} Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.511820 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:40 crc kubenswrapper[4766]: W0130 16:40:40.596460 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21357e1_82c9_419a_a191_359c84d6d001.slice/crio-3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107 WatchSource:0}: Error finding container 3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107: Status 404 returned error can't find the container with id 3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107 Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.665905 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:40 crc kubenswrapper[4766]: W0130 16:40:40.679973 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2a138c_9abd_427b_815c_cbb9e12459f6.slice/crio-737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677 WatchSource:0}: Error finding container 737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677: Status 404 returned error can't find the container with id 737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677 Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.129231 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.130548 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.158031 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.158433 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.159751 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.160650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x2qq7" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.173450 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.181063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294791 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294812 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.395945 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396484 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396648 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.397806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.398928 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.399929 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.401055 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.405736 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.416272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.418805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.425846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.439121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.478205 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.556205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107"} Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.599821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677"} Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.395812 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: W0130 16:40:42.475710 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62dd6ad1_1550_48cf_b103_b7ab6dd93c97.slice/crio-7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f WatchSource:0}: Error finding container 7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f: Status 404 returned error can't find the container with id 7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.634129 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f"} Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.638222 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.639996 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.642781 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zd2kf" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643778 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643852 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.646825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747262 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.833353 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.848555 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849903 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " 
pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.850011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.850801 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.852557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.852748 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854661 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.857072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fngzp" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.862074 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.863303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.864803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.911843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.918580 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.951991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952050 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.975431 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057777 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057822 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.059453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.059862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.084970 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.084976 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.094162 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: 
\"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.281879 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.882575 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.908237 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:43 crc kubenswrapper[4766]: W0130 16:40:43.960153 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f7793d_39bd_4e96_a857_7de972f0c76d.slice/crio-38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2 WatchSource:0}: Error finding container 38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2: Status 404 returned error can't find the container with id 38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2 Jan 30 16:40:44 crc kubenswrapper[4766]: W0130 16:40:44.044207 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ad68dc2_23ff_4044_b74d_149ae8f02bc0.slice/crio-86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3 WatchSource:0}: Error finding container 86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3: Status 404 returned error can't find the container with id 86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3 Jan 30 16:40:44 crc kubenswrapper[4766]: I0130 16:40:44.799132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerStarted","Data":"38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2"} Jan 30 16:40:44 crc kubenswrapper[4766]: I0130 16:40:44.837606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3"} Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.018046 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.019197 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.021158 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-db5vw"
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.044023 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.134021 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0"
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.236250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0"
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.276775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0"
Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.356806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 16:40:46 crc kubenswrapper[4766]: I0130 16:40:46.094564 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 16:40:46 crc kubenswrapper[4766]: I0130 16:40:46.866695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerStarted","Data":"6ab83b607cb34660892c3f858dbee7a7095d74efd1f6621864cf951d1afb4fc6"}
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.327690 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.329260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nwj8z"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335384 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335645 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.338931 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.392224 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.394469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.402709 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418703 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418797 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418851 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520626 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521496 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.522151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.527131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.536805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.537388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.547117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623775 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623912 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623945 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624441 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc 
kubenswrapper[4766]: I0130 16:40:48.627828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.645059 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.662073 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.731956 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.861187 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.863346 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866477 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866710 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866851 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.867095 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.867258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zxvhd"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.882688 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930259 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930326 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033286 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033544 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033575 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033606 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.034616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.037600 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.037944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.039685 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.041375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.042403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.049843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.062140 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.063128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.210649 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.125759 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.131968 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.134522 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.136267 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8khlz"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.136712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.137002 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.138787 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.208942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209065 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209395 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209727 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209801 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.210004 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312122 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312639 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod 
\"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312725 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312820 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.313107 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.313944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.314003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.314347 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.324131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.327382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.327491 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.331711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.343168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.473568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805193 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805673 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7v65m_openstack(900942aa-a667-42dc-9ddf-a1909585c2e3): ErrImagePull: rpc error: code = Canceled desc = 
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805163 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805850 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvdkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rvfhb_openstack(7c2933e1-c67d-45a6-8e08-fac512f6473b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.807040 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" podUID="900942aa-a667-42dc-9ddf-a1909585c2e3"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.807106 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.818515 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.818734 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw52n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-69ttv_openstack(55ba5675-86b8-409a-b2f5-c0dbd6b95f2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.820076 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" podUID="55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"
Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.047818 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b"
Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.050604 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.050883 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbx8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(bc2a138c-9abd-427b-815c-cbb9e12459f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.052406 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.054037 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.816076 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.816287 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4qrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(62dd6ad1-1550-48cf-b103-b7ab6dd93c97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.817701 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.867727 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.867889 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsqpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-6647p_openstack(a31c7217-d6d2-4cc1-ab83-016373333c80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.868996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80"
Jan 30 16:41:06 crc kubenswrapper[4766]: E0130 16:41:06.060677 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97"
Jan 30 16:41:06 crc kubenswrapper[4766]: E0130 16:41:06.060975 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80"
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.262531 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m"
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.265084 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv"
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"900942aa-a667-42dc-9ddf-a1909585c2e3\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") "
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") "
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409388 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") "
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409428 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"900942aa-a667-42dc-9ddf-a1909585c2e3\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") "
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") "
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.410257 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.411142 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.411223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config" (OuterVolumeSpecName: "config") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.415080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config" (OuterVolumeSpecName: "config") pod "900942aa-a667-42dc-9ddf-a1909585c2e3" (UID: "900942aa-a667-42dc-9ddf-a1909585c2e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.415873 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr" (OuterVolumeSpecName: "kube-api-access-m5xtr") pod "900942aa-a667-42dc-9ddf-a1909585c2e3" (UID: "900942aa-a667-42dc-9ddf-a1909585c2e3"). InnerVolumeSpecName "kube-api-access-m5xtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.417545 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n" (OuterVolumeSpecName: "kube-api-access-qw52n") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "kube-api-access-qw52n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515587 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515666 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515679 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") on node \"crc\" DevicePath \"\""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515705 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") on node \"crc\" DevicePath \"\""
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.788979 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh"]
Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.818646 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.073096 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.093038 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" event={"ID":"900942aa-a667-42dc-9ddf-a1909585c2e3","Type":"ContainerDied","Data":"4dd23f899f0d12a8b608725fc3a9970423f5d27f8151e6c03d79ba260849d2dc"}
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.093103 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m"
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.097121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerStarted","Data":"7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f"}
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.097513 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.101096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387"}
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.104142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" event={"ID":"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b","Type":"ContainerDied","Data":"7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68"}
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.104253 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv"
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.136210 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.760907124 podStartE2EDuration="26.136146395s" podCreationTimestamp="2026-01-30 16:40:42 +0000 UTC" firstStartedPulling="2026-01-30 16:40:44.015766462 +0000 UTC m=+1098.653723808" lastFinishedPulling="2026-01-30 16:41:07.391005733 +0000 UTC m=+1122.028963079" observedRunningTime="2026-01-30 16:41:08.129992516 +0000 UTC m=+1122.767949862" watchObservedRunningTime="2026-01-30 16:41:08.136146395 +0000 UTC m=+1122.774103741"
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.186679 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.189295 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.251764 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.264399 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"]
Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.422112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"]
Jan 30 16:41:08 crc kubenswrapper[4766]: W0130 16:41:08.550050 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a501828_e06b_4096_b555_1ecd9323ee20.slice/crio-f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0 WatchSource:0}: Error finding container f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0: Status 404 returned error can't find the container with id f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0
Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.045638 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.045705 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.112947 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.115132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerStarted","Data":"35bff03af4700c59de26d7f263ff6609c1c1e4962e327e55accdbc5ea2056c14"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.118687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.123474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"e1760b87e9caefe6e9c0ac6d3d9d8457bd91e81888eeb4755458d5a683cbea69"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.125532 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"44d944c146c567ab0a586afa23a8e30b46436b5558ae7e1ed7aeb15de65469a1"} Jan 30 16:41:10 crc kubenswrapper[4766]: I0130 16:41:10.050900 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" path="/var/lib/kubelet/pods/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b/volumes" Jan 30 16:41:10 crc kubenswrapper[4766]: I0130 16:41:10.051435 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900942aa-a667-42dc-9ddf-a1909585c2e3" path="/var/lib/kubelet/pods/900942aa-a667-42dc-9ddf-a1909585c2e3/volumes" Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.156397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerStarted","Data":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.156800 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.179125 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.156861909 podStartE2EDuration="26.17910614s" podCreationTimestamp="2026-01-30 16:40:45 +0000 UTC" firstStartedPulling="2026-01-30 16:40:46.114594005 +0000 UTC m=+1100.752551351" lastFinishedPulling="2026-01-30 16:41:10.136838236 +0000 UTC m=+1124.774795582" observedRunningTime="2026-01-30 16:41:11.177712382 +0000 UTC m=+1125.815669728" 
watchObservedRunningTime="2026-01-30 16:41:11.17910614 +0000 UTC m=+1125.817063486" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.172982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.175426 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.177741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.180732 4766 generic.go:334] "Generic (PLEG): container finished" podID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerID="e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387" exitCode=0 Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.180805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.185698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerStarted","Data":"cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.186366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-clmnh" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.223399 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-clmnh" podStartSLOduration=21.022885557 podStartE2EDuration="25.223377541s" podCreationTimestamp="2026-01-30 16:40:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.54913066 +0000 UTC m=+1123.187088016" lastFinishedPulling="2026-01-30 16:41:12.749622654 +0000 UTC m=+1127.387580000" observedRunningTime="2026-01-30 16:41:13.221274813 +0000 UTC m=+1127.859232179" watchObservedRunningTime="2026-01-30 16:41:13.223377541 +0000 UTC m=+1127.861334887" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.283336 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.197774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae"} Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.203528 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4" exitCode=0 Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.203659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" 
event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4"} Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.224109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.901606543 podStartE2EDuration="33.224086471s" podCreationTimestamp="2026-01-30 16:40:41 +0000 UTC" firstStartedPulling="2026-01-30 16:40:44.074838139 +0000 UTC m=+1098.712795485" lastFinishedPulling="2026-01-30 16:41:07.397318067 +0000 UTC m=+1122.035275413" observedRunningTime="2026-01-30 16:41:14.219585108 +0000 UTC m=+1128.857542454" watchObservedRunningTime="2026-01-30 16:41:14.224086471 +0000 UTC m=+1128.862043817" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.226924 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.233306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.242893 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.258515 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.084553136 podStartE2EDuration="28.258498851s" podCreationTimestamp="2026-01-30 16:40:47 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.546397254 +0000 UTC m=+1123.184354600" lastFinishedPulling="2026-01-30 16:41:14.720342979 +0000 UTC m=+1129.358300315" observedRunningTime="2026-01-30 16:41:15.256350371 +0000 UTC m=+1129.894307727" watchObservedRunningTime="2026-01-30 16:41:15.258498851 +0000 UTC m=+1129.896456207" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.296742 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.882150282 podStartE2EDuration="24.296717943s" podCreationTimestamp="2026-01-30 16:40:51 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.288736128 +0000 UTC m=+1122.926693474" lastFinishedPulling="2026-01-30 16:41:14.703303779 +0000 UTC m=+1129.341261135" observedRunningTime="2026-01-30 16:41:15.282956894 +0000 UTC m=+1129.920914260" watchObservedRunningTime="2026-01-30 16:41:15.296717943 +0000 UTC m=+1129.934675289" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.343091 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.375693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.433278 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.435025 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.450246 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579164 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.580230 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.581042 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.624265 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88kl\" (UniqueName: 
\"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.792611 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.968112 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.992416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.992995 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.993127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.993887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config" (OuterVolumeSpecName: "config") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.994051 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.995123 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.995350 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.002303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd" (OuterVolumeSpecName: "kube-api-access-jvdkd") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "kube-api-access-jvdkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.097323 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.211680 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.252865 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.252887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" event={"ID":"7c2933e1-c67d-45a6-8e08-fac512f6473b","Type":"ContainerDied","Data":"a4daa3864bea92099d39184deffbe2394e36c62e86137adfe5c0a64228217582"} Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.258007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9"} Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.258443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.262164 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.321928 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.333135 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.341112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:16 crc kubenswrapper[4766]: W0130 16:41:16.341333 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8114a4cb_b868_4813_836e_6e12b1b37c00.slice/crio-6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15 WatchSource:0}: Error finding container 6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15: Status 404 returned error can't find the container with id 6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15 Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.342528 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l6hkn" podStartSLOduration=24.150965957 podStartE2EDuration="28.342504985s" podCreationTimestamp="2026-01-30 16:40:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.556389689 +0000 UTC m=+1123.194347035" lastFinishedPulling="2026-01-30 16:41:12.747928717 +0000 UTC m=+1127.385886063" observedRunningTime="2026-01-30 16:41:16.329395364 +0000 UTC m=+1130.967352740" watchObservedRunningTime="2026-01-30 16:41:16.342504985 +0000 UTC m=+1130.980462341" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.474592 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 
16:41:16.515480 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.619890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.635303 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638272 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638577 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r75sb" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638765 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.639583 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.643697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811715 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811829 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811958 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.812009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 
16:41:16.862515 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.863893 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.865675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.866337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.868338 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.874716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913737 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913841 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913871 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913928 4766 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:17.413906731 +0000 UTC m=+1132.051864067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914076 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0"
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.920378 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.936229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.945433 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
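The projected.go errors above show why swift-storage-0 is stuck: its etc-swift volume is a projected volume whose ConfigMap source, swift-ring-files, does not exist yet (the swift-ring-rebalance-n8rf4 job being scheduled here is what will eventually publish it). A minimal sketch of how such a volume is declared with the k8s.io/api types, with names copied from the log; the real spec is rendered by the swift operator, so treat this as illustrative only:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcSwiftVolume sketches the projected volume the kubelet is retrying above.
// A projected volume materializes one directory from several sources; until
// the swift-ring-files ConfigMap exists, MountVolume.SetUp keeps failing and
// the pod cannot start.
func etcSwiftVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
						// Optional defaults to false: a missing ConfigMap is a hard
						// error ("configmap \"swift-ring-files\" not found") rather
						// than an empty directory.
					},
				}},
			},
		},
	}
}

func main() {
	v := etcSwiftVolume()
	fmt.Println(v.Name, "<-", v.Projected.Sources[0].ConfigMap.Name)
}

Leaving Optional unset is the design choice that produces the retry loop in this log: the pod blocks until the ring files exist instead of starting with an empty /etc/swift.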
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117577 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117812 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117839 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118067 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.121344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.121483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.125036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.140885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx9v\" (UniqueName: 
\"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.181907 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerStarted","Data":"6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15"} Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265764 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265813 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265829 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423391 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423611 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:18.42365335 +0000 UTC m=+1133.061610696 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.423390 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.641660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"]
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.059066 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b" path="/var/lib/kubelet/pods/7c2933e1-c67d-45a6-8e08-fac512f6473b/volumes"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.273591 4766 generic.go:334] "Generic (PLEG): container finished" podID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerID="47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f" exitCode=0
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.273656 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f"}
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.281461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerStarted","Data":"0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204"}
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.336854 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.340208 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.445883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0"
Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446119 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446152 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446372 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:20.446353157 +0000 UTC m=+1135.084310503 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found
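Note the retry cadence across these nestedpendingoperations.go failures: durationBeforeRetry goes 500ms, then 1s, then 2s, i.e. the kubelet doubles the per-operation backoff on each consecutive failure of the same volume operation. A simplified Go sketch of that doubling pattern; the kubelet's real implementation lives behind nestedpendingoperations.go, and the cap used here is an assumption rather than a constant quoted from its source:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the delay after each consecutive failure, up to a cap.
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	switch {
	case b.current == 0:
		b.current = b.initial // first failure: start at the initial delay
	case b.current < b.max:
		b.current *= 2 // subsequent failures: double, clamped to max
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 4; i++ {
		// Prints 500ms, 1s, 2s, 4s — the first three match the delays logged above.
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.next())
	}
}

The backoff resets once the operation succeeds, which is why the etc-swift mount proceeds normally later, as soon as the ring-rebalance job publishes the ConfigMap.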
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.589257 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"]
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.597047 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"]
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.599822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.614446 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"]
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.615585 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650000 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650049 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6"
Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752650 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753378 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.754249 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.797397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.807008 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.808380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.819472 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.823961 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.934053 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.936709 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958147 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958307 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958489 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958619 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.959523 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod 
\"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.978226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.981226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.995152 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.004855 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.007673 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016224 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016277 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016489 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016663 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-njt4v" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.027931 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.047838 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.059975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060117 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060380 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060410 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060435 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.102526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.118077 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.118545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161615 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " 
pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161736 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.162962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.163264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.163940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.164768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.164911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.165248 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.166857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.177735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.183965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.193885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.194147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.194442 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.341649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" event={"ID":"a31c7217-d6d2-4cc1-ab83-016373333c80","Type":"ContainerDied","Data":"c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5"} Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.341721 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.347156 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.358667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerStarted","Data":"88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d"} Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.365540 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.384908 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.387435 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" podStartSLOduration=3.635170537 podStartE2EDuration="4.387416113s" podCreationTimestamp="2026-01-30 16:41:15 +0000 UTC" firstStartedPulling="2026-01-30 16:41:16.343808191 +0000 UTC m=+1130.981765537" lastFinishedPulling="2026-01-30 16:41:17.096053767 +0000 UTC m=+1131.734011113" observedRunningTime="2026-01-30 16:41:19.384864794 +0000 UTC m=+1134.022822140" watchObservedRunningTime="2026-01-30 16:41:19.387416113 +0000 UTC m=+1134.025373459" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.388831 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578633 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.580225 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.580907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config" (OuterVolumeSpecName: "config") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.585024 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.585054 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.627964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd" (OuterVolumeSpecName: "kube-api-access-dsqpd") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "kube-api-access-dsqpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.628050 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:19 crc kubenswrapper[4766]: W0130 16:41:19.651972 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd754d8cb_87c5_4ca2_a9d2_e3aef7548f2d.slice/crio-8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465 WatchSource:0}: Error finding container 8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465: Status 404 returned error can't find the container with id 8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465 Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.687714 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.107401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:20 crc kubenswrapper[4766]: W0130 16:41:20.111388 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod140fa04a_cb22_40ed_a08c_17f4ea13a5c4.slice/crio-c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426 WatchSource:0}: Error finding container c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426: Status 404 returned error can't find the container with id c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.115524 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:20 crc kubenswrapper[4766]: W0130 16:41:20.117663 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4db25e7_718f_4a48_8dd2_2db2ae9e804c.slice/crio-9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89 WatchSource:0}: Error finding container 9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89: Status 404 returned error can't find the container with id 9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.192016 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.373933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.375473 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"090eddff40a00fe6ea2b9a4d39ef4e8496a69421f9440b673916d296607e29b3"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.379033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerStarted","Data":"c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.380252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.381896 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.381912 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.382404 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.382591 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" containerID="cri-o://88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" gracePeriod=10 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.428997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.434602 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.511069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511393 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511447 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511508 4766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:24.511488332 +0000 UTC m=+1139.149445678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.393595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.395857 4766 generic.go:334] "Generic (PLEG): container finished" podID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerID="88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" exitCode=0 Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.395919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.397414 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerStarted","Data":"ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.398911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.401606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.048588 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80" path="/var/lib/kubelet/pods/a31c7217-d6d2-4cc1-ab83-016373333c80/volumes" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.413946 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerID="e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171" exitCode=0 Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.414024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.417300 4766 generic.go:334] "Generic (PLEG): container finished" podID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" exitCode=0 Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.417645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.497118 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rsxl2" podStartSLOduration=4.497097956 podStartE2EDuration="4.497097956s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:22.495078681 +0000 UTC m=+1137.133036047" watchObservedRunningTime="2026-01-30 16:41:22.497097956 +0000 UTC m=+1137.135055322" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.977163 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.977594 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.055555 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.429315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.429705 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.441676 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7"} Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.448691 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" podStartSLOduration=5.448654913 podStartE2EDuration="5.448654913s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:23.446474893 +0000 UTC m=+1138.084432239" watchObservedRunningTime="2026-01-30 16:41:23.448654913 +0000 UTC m=+1138.086612259" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.489165 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rghwg" podStartSLOduration=5.489144529 podStartE2EDuration="5.489144529s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:23.484349346 +0000 UTC m=+1138.122306712" watchObservedRunningTime="2026-01-30 16:41:23.489144529 +0000 UTC m=+1138.127101875" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.531594 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.385491 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.385558 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15"} Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454885 4766 scope.go:117] "RemoveContainer" containerID="88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494578 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.502675 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl" (OuterVolumeSpecName: "kube-api-access-g88kl") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "kube-api-access-g88kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.541973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config" (OuterVolumeSpecName: "config") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.550964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.597874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598355 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598681 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598695 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598712 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598737 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598782 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:32.598763578 +0000 UTC m=+1147.236720924 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.795937 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.803507 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:25 crc kubenswrapper[4766]: I0130 16:41:25.545863 4766 scope.go:117] "RemoveContainer" containerID="47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.060690 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" path="/var/lib/kubelet/pods/8114a4cb-b868-4813-836e-6e12b1b37c00/volumes" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.486351 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.487235 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.492832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerStarted","Data":"d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.494492 4766 generic.go:334] "Generic (PLEG): container finished" podID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171" exitCode=0 Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.494530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.510393 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.796306466 podStartE2EDuration="8.510347214s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="2026-01-30 16:41:20.19734701 +0000 UTC m=+1134.835304346" lastFinishedPulling="2026-01-30 16:41:25.911387748 +0000 UTC m=+1140.549345094" observedRunningTime="2026-01-30 16:41:26.505673595 +0000 UTC m=+1141.143630961" watchObservedRunningTime="2026-01-30 16:41:26.510347214 +0000 UTC m=+1141.148304590" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.529251 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n8rf4" podStartSLOduration=2.346672951 podStartE2EDuration="10.529217974s" podCreationTimestamp="2026-01-30 16:41:16 +0000 UTC" firstStartedPulling="2026-01-30 16:41:17.645191032 +0000 UTC m=+1132.283148378" lastFinishedPulling="2026-01-30 16:41:25.827736055 +0000 UTC m=+1140.465693401" 
observedRunningTime="2026-01-30 16:41:26.526951302 +0000 UTC m=+1141.164908648" watchObservedRunningTime="2026-01-30 16:41:26.529217974 +0000 UTC m=+1141.167175320" Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.505057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"} Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.505512 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.531169 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371989.323624 podStartE2EDuration="47.531152728s" podCreationTimestamp="2026-01-30 16:40:40 +0000 UTC" firstStartedPulling="2026-01-30 16:40:42.514465045 +0000 UTC m=+1097.152422381" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:27.527424905 +0000 UTC m=+1142.165382251" watchObservedRunningTime="2026-01-30 16:41:27.531152728 +0000 UTC m=+1142.169110074" Jan 30 16:41:28 crc kubenswrapper[4766]: I0130 16:41:28.936374 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.387432 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.439077 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.520944 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" containerID="cri-o://e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" gracePeriod=10 Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.432763 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507595 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507867 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.514020 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2" (OuterVolumeSpecName: "kube-api-access-frqn2") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "kube-api-access-frqn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529468 4766 generic.go:334] "Generic (PLEG): container finished" podID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" exitCode=0 Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529536 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465"} Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529583 4766 scope.go:117] "RemoveContainer" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529725 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.545566 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.553445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config" (OuterVolumeSpecName: "config") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.554120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.601344 4766 scope.go:117] "RemoveContainer" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610212 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610245 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610255 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610263 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.621635 4766 scope.go:117] "RemoveContainer" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: E0130 16:41:30.622114 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": container with ID starting with e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83 not found: ID does not exist" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622150 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} err="failed to get container status \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": rpc error: code = NotFound desc = could not find container \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": container with ID starting with e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83 not found: ID does not exist" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622172 4766 scope.go:117] "RemoveContainer" 
containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: E0130 16:41:30.622545 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": container with ID starting with 9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd not found: ID does not exist" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622579 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} err="failed to get container status \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": rpc error: code = NotFound desc = could not find container \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": container with ID starting with 9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd not found: ID does not exist" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.862694 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.869665 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.479526 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.479976 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668035 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668462 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668495 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668502 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668523 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668530 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668549 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668555 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 
16:41:31.668726 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668745 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.669378 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.672141 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.678351 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.730754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.730878 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.832079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.832306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.833036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.864642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.989261 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.049679 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" path="/var/lib/kubelet/pods/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d/volumes" Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.467021 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.556273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerStarted","Data":"aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97"} Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.651803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652136 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652374 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:48.652418582 +0000 UTC m=+1163.290375928 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:33 crc kubenswrapper[4766]: I0130 16:41:33.567422 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerStarted","Data":"29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8"} Jan 30 16:41:33 crc kubenswrapper[4766]: I0130 16:41:33.585520 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wht5r" podStartSLOduration=2.58549986 podStartE2EDuration="2.58549986s" podCreationTimestamp="2026-01-30 16:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:33.581216071 +0000 UTC m=+1148.219173427" watchObservedRunningTime="2026-01-30 16:41:33.58549986 +0000 UTC m=+1148.223457206" Jan 30 16:41:35 crc kubenswrapper[4766]: I0130 16:41:35.586646 4766 generic.go:334] "Generic (PLEG): container finished" podID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerID="d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1" exitCode=0 Jan 30 16:41:35 crc kubenswrapper[4766]: I0130 16:41:35.586739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerDied","Data":"d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1"} Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.596744 4766 generic.go:334] "Generic (PLEG): container finished" podID="93fa2128-fb98-4cca-9067-a864a6207188" containerID="29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8" exitCode=0 Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.596827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerDied","Data":"29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8"} Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.924075 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.030501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.108507 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" probeResult="failure" output=< Jan 30 16:41:37 crc kubenswrapper[4766]: wsrep_local_state_comment (Joined) differs from Synced Jan 30 16:41:37 crc kubenswrapper[4766]: > Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114879 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.115144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.116154 4766 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.116321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.121796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v" (OuterVolumeSpecName: "kube-api-access-8hx9v") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "kube-api-access-8hx9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.126717 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.142153 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts" (OuterVolumeSpecName: "scripts") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.144012 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.146809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.217815 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218135 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218233 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218330 4766 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218435 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218672 4766 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.608996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerDied","Data":"0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204"} Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.609404 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.609024 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.018942 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.133848 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"93fa2128-fb98-4cca-9067-a864a6207188\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.133890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"93fa2128-fb98-4cca-9067-a864a6207188\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.134683 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93fa2128-fb98-4cca-9067-a864a6207188" (UID: "93fa2128-fb98-4cca-9067-a864a6207188"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.138934 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt" (OuterVolumeSpecName: "kube-api-access-wswmt") pod "93fa2128-fb98-4cca-9067-a864a6207188" (UID: "93fa2128-fb98-4cca-9067-a864a6207188"). InnerVolumeSpecName "kube-api-access-wswmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.236047 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.236084 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerDied","Data":"aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97"} Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626624 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626716 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.045699 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046039 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046754 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046811 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" gracePeriod=600 Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.428875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.638282 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" exitCode=0 Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.638414 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.639074 4766 scope.go:117] "RemoveContainer" containerID="5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" Jan 30 16:41:40 crc kubenswrapper[4766]: I0130 16:41:40.649405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.553285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.659090 4766 generic.go:334] "Generic (PLEG): container finished" podID="b21357e1-82c9-419a-a191-359c84d6d001" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d" 
exitCode=0 Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.659438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.785112 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:42 crc kubenswrapper[4766]: E0130 16:41:42.786145 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786164 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: E0130 16:41:42.786226 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786236 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786437 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786454 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.787056 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.794666 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.876324 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.877744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.881888 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.886953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.917011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.917063 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018529 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018585 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018627 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018648 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.019337 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.039876 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod 
\"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.091335 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.092845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.104134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.104865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.120249 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.120316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.122075 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.182004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.199379 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.210485 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.211927 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.214356 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.220942 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.221693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.221804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.325336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.327339 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.345921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pnt\" (UniqueName: 
\"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.410382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.431349 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.431430 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.432065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.466697 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.467965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.468220 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.483780 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.532977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.533405 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.571960 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.573277 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.575855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.595132 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.606751 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.635392 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.635519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.636334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.665085 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.677983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"} Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.678290 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.711995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.172516344 podStartE2EDuration="1m5.71197083s" podCreationTimestamp="2026-01-30 16:40:38 +0000 UTC" firstStartedPulling="2026-01-30 16:40:40.622673684 +0000 UTC m=+1095.260631030" lastFinishedPulling="2026-01-30 16:41:07.16212817 +0000 UTC m=+1121.800085516" observedRunningTime="2026-01-30 16:41:43.703832036 +0000 UTC m=+1158.341789382" watchObservedRunningTime="2026-01-30 16:41:43.71197083 +0000 UTC m=+1158.349928176" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.716850 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:41:43 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:41:43 crc kubenswrapper[4766]: > Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.736903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " 
pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.736981 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.761000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:43 crc kubenswrapper[4766]: W0130 16:41:43.769695 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dbf5802_dfa7_4b32_aaa5_48fc779da5d6.slice/crio-41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8 WatchSource:0}: Error finding container 41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8: Status 404 returned error can't find the container with id 41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8 Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.798540 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.838342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.838986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.839131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.855120 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.862982 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.900204 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.972756 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.166572 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:44 crc kubenswrapper[4766]: W0130 16:41:44.175721 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75830eb2_571a_4fef_92b5_057b0928cfe0.slice/crio-7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727 WatchSource:0}: Error finding container 7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727: Status 404 returned error can't find the container with id 7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.264494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:44 crc kubenswrapper[4766]: W0130 16:41:44.292789 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8af029_8432_4152_8e74_5c40d72636d7.slice/crio-e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6 WatchSource:0}: Error finding container e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6: Status 404 returned error can't find the container with id e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.362762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689393 4766 generic.go:334] "Generic (PLEG): container finished" podID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerID="cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395" exitCode=0 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerDied","Data":"cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerStarted","Data":"07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691717 4766 generic.go:334] "Generic (PLEG): container finished" podID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerID="7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d" exitCode=0 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerDied","Data":"7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" 
event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerStarted","Data":"41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.693482 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerStarted","Data":"996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.693515 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerStarted","Data":"e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.699886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerStarted","Data":"2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.699925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerStarted","Data":"7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.705889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerStarted","Data":"5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.705942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerStarted","Data":"eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.708244 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerStarted","Data":"e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.708293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerStarted","Data":"3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.750152 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e3be-account-create-update-n7qg6" podStartSLOduration=2.750125002 podStartE2EDuration="2.750125002s" podCreationTimestamp="2026-01-30 16:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.732761514 +0000 UTC m=+1159.370718860" watchObservedRunningTime="2026-01-30 16:41:44.750125002 +0000 UTC m=+1159.388082338" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.753412 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cc14-account-create-update-jhjn2" podStartSLOduration=1.753392221 
podStartE2EDuration="1.753392221s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.748560029 +0000 UTC m=+1159.386517395" watchObservedRunningTime="2026-01-30 16:41:44.753392221 +0000 UTC m=+1159.391349567" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.776815 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2h7p2" podStartSLOduration=1.7767925070000001 podStartE2EDuration="1.776792507s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.769835094 +0000 UTC m=+1159.407792450" watchObservedRunningTime="2026-01-30 16:41:44.776792507 +0000 UTC m=+1159.414749853" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.796833 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-63c5-account-create-update-sx7bq" podStartSLOduration=1.796810558 podStartE2EDuration="1.796810558s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.794534365 +0000 UTC m=+1159.432491711" watchObservedRunningTime="2026-01-30 16:41:44.796810558 +0000 UTC m=+1159.434767904" Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.718265 4766 generic.go:334] "Generic (PLEG): container finished" podID="4c8af029-8432-4152-8e74-5c40d72636d7" containerID="996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.718669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerDied","Data":"996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.722940 4766 generic.go:334] "Generic (PLEG): container finished" podID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerID="2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.723247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerDied","Data":"2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.725622 4766 generic.go:334] "Generic (PLEG): container finished" podID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerID="5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.725714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerDied","Data":"5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.728211 4766 generic.go:334] "Generic (PLEG): container finished" podID="acb52775-c639-4afc-9f21-f33531a854b3" containerID="e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 
16:41:45.728292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerDied","Data":"e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.233699 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.243720 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.285615 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286392 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.287054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" (UID: "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.287574 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.288073 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12ab95d5-fb83-42b1-a38b-9e3bb8916f37" (UID: "12ab95d5-fb83-42b1-a38b-9e3bb8916f37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.296464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs" (OuterVolumeSpecName: "kube-api-access-rzqqs") pod "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" (UID: "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6"). InnerVolumeSpecName "kube-api-access-rzqqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.299765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt" (OuterVolumeSpecName: "kube-api-access-52pnt") pod "12ab95d5-fb83-42b1-a38b-9e3bb8916f37" (UID: "12ab95d5-fb83-42b1-a38b-9e3bb8916f37"). InnerVolumeSpecName "kube-api-access-52pnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.388989 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.389317 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.389409 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerDied","Data":"41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737191 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737221 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerDied","Data":"07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739791 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739940 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.116860 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.216584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"75830eb2-571a-4fef-92b5-057b0928cfe0\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.216748 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"75830eb2-571a-4fef-92b5-057b0928cfe0\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.217510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75830eb2-571a-4fef-92b5-057b0928cfe0" (UID: "75830eb2-571a-4fef-92b5-057b0928cfe0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.219981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4" (OuterVolumeSpecName: "kube-api-access-snrr4") pod "75830eb2-571a-4fef-92b5-057b0928cfe0" (UID: "75830eb2-571a-4fef-92b5-057b0928cfe0"). InnerVolumeSpecName "kube-api-access-snrr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.288971 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.294403 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.301635 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.321012 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.321062 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422749 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"acb52775-c639-4afc-9f21-f33531a854b3\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422946 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"acb52775-c639-4afc-9f21-f33531a854b3\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"4c8af029-8432-4152-8e74-5c40d72636d7\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"4c8af029-8432-4152-8e74-5c40d72636d7\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fb40e54-43ed-4dd6-8c23-138c01cf062d" (UID: "3fb40e54-43ed-4dd6-8c23-138c01cf062d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.424147 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c8af029-8432-4152-8e74-5c40d72636d7" (UID: "4c8af029-8432-4152-8e74-5c40d72636d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.424159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acb52775-c639-4afc-9f21-f33531a854b3" (UID: "acb52775-c639-4afc-9f21-f33531a854b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.427211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w" (OuterVolumeSpecName: "kube-api-access-h546w") pod "acb52775-c639-4afc-9f21-f33531a854b3" (UID: "acb52775-c639-4afc-9f21-f33531a854b3"). InnerVolumeSpecName "kube-api-access-h546w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.427955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v" (OuterVolumeSpecName: "kube-api-access-2t76v") pod "3fb40e54-43ed-4dd6-8c23-138c01cf062d" (UID: "3fb40e54-43ed-4dd6-8c23-138c01cf062d"). InnerVolumeSpecName "kube-api-access-2t76v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.432442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt" (OuterVolumeSpecName: "kube-api-access-2crjt") pod "4c8af029-8432-4152-8e74-5c40d72636d7" (UID: "4c8af029-8432-4152-8e74-5c40d72636d7"). InnerVolumeSpecName "kube-api-access-2crjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525877 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525949 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526004 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526058 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526113 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.748998 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.748981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerDied","Data":"3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.749505 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerDied","Data":"e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750853 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750804 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerDied","Data":"7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753242 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753289 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerDied","Data":"eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754817 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754874 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.701910 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:41:48 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:41:48 crc kubenswrapper[4766]: > Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.746535 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.754303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.764386 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.824159 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.837720 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.885572 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.885990 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886334 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886364 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886372 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886391 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886418 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886425 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886436 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886443 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886454 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886460 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886624 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886636 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: 
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886691 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886702 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886712 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.887388 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.891467 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6xjc8"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.891674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.919695 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpmx7"]
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.952776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.952999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.953039 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.953276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7"
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.062969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.066503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.066498 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.081226 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.092239 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.093726 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.095984 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.114369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161317 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161357 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.229710 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267225 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267555 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " 
pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.268166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.270515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.287171 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.476654 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.535209 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.775614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"e2895452d8c205fa0d4dc996a2287e6197931bc707b2d07e3c6da2c761ed67e2"} Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.811286 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.016645 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.128316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.151168 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.212236 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.213363 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.215542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.226591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.295066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.295124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.396356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.396714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.397829 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.428652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.535741 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.785698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerStarted","Data":"156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59"} Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788296 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerID="5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392" exitCode=0 Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerDied","Data":"5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392"} Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerStarted","Data":"00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.171611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:51 crc kubenswrapper[4766]: W0130 16:41:51.186421 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9dd82ac_e512_442e_97c4_53be730affca.slice/crio-b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c WatchSource:0}: Error finding container b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c: Status 404 returned error can't find the container with id b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.806305 4766 generic.go:334] "Generic (PLEG): container finished" podID="e9dd82ac-e512-442e-97c4-53be730affca" containerID="10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60" exitCode=0 Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.807128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerDied","Data":"10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.807154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerStarted","Data":"b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817811 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.053487 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fa2128-fb98-4cca-9067-a864a6207188" path="/var/lib/kubelet/pods/93fa2128-fb98-4cca-9067-a864a6207188/volumes" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.204334 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342420 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342646 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342806 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342767 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run" (OuterVolumeSpecName: "var-run") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342848 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). 
InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342880 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343218 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343243 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343258 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.344146 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts" (OuterVolumeSpecName: "scripts") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.364465 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2" (OuterVolumeSpecName: "kube-api-access-s7bk2") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "kube-api-access-s7bk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445246 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445288 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445303 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.829589 4766 generic.go:334] "Generic (PLEG): container finished" podID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerID="420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5" exitCode=0 Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.829673 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.835966 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837920 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerDied","Data":"00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837983 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837997 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.306264 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.316158 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.432495 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:53 crc kubenswrapper[4766]: E0130 16:41:53.432880 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.432901 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.436798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.437546 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.439164 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.447017 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.450128 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.564820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"e9dd82ac-e512-442e-97c4-53be730affca\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.564963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"e9dd82ac-e512-442e-97c4-53be730affca\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565367 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565702 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "e9dd82ac-e512-442e-97c4-53be730affca" (UID: "e9dd82ac-e512-442e-97c4-53be730affca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.570902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj" (OuterVolumeSpecName: "kube-api-access-52cwj") pod "e9dd82ac-e512-442e-97c4-53be730affca" (UID: "e9dd82ac-e512-442e-97c4-53be730affca"). InnerVolumeSpecName "kube-api-access-52cwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667264 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667287 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667340 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667383 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667441 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667453 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667619 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.668474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.669695 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.698069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.711075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-clmnh" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.806248 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerDied","Data":"b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850915 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850877 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.861278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.861572 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.867437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.896871 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371960.957924 podStartE2EDuration="1m15.896852127s" podCreationTimestamp="2026-01-30 16:40:38 +0000 UTC" firstStartedPulling="2026-01-30 16:40:40.695273353 +0000 UTC m=+1095.333230699" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:53.892633835 +0000 UTC m=+1168.530591211" watchObservedRunningTime="2026-01-30 16:41:53.896852127 +0000 UTC m=+1168.534809473" Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.054717 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" path="/var/lib/kubelet/pods/cbb373eb-bd59-4480-80b6-bd1b2427105b/volumes" Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.383574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:54 crc kubenswrapper[4766]: W0130 16:41:54.410905 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19522cbf_c17c_411f_9732_986bd8ea5c1f.slice/crio-c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c WatchSource:0}: Error finding container c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c: Status 404 returned error can't find the container with id c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.885938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.886543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.886578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.887933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" 
event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerStarted","Data":"ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.887976 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerStarted","Data":"c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.919356 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-clmnh-config-zx269" podStartSLOduration=1.919335322 podStartE2EDuration="1.919335322s" podCreationTimestamp="2026-01-30 16:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:54.910878555 +0000 UTC m=+1169.548835901" watchObservedRunningTime="2026-01-30 16:41:54.919335322 +0000 UTC m=+1169.557292668" Jan 30 16:41:55 crc kubenswrapper[4766]: I0130 16:41:55.900309 4766 generic.go:334] "Generic (PLEG): container finished" podID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerID="ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a" exitCode=0 Jan 30 16:41:55 crc kubenswrapper[4766]: I0130 16:41:55.900365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerDied","Data":"ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.459819 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545925 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545992 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546098 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.547686 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.547715 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") on node \"crc\" 
DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.549565 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts" (OuterVolumeSpecName: "scripts") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.549640 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run" (OuterVolumeSpecName: "var-run") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.550887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.558937 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv" (OuterVolumeSpecName: "kube-api-access-h92pv") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "kube-api-access-h92pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649663 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649708 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649720 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649734 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962054 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962099 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerDied","Data":"c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962141 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.976968 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977012 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94"} Jan 30 16:41:58 crc kubenswrapper[4766]: I0130 16:41:58.546590 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:58 crc kubenswrapper[4766]: I0130 16:41:58.555943 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.027166 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.531740307 podStartE2EDuration="44.027144359s" podCreationTimestamp="2026-01-30 16:41:15 +0000 UTC" firstStartedPulling="2026-01-30 16:41:49.549201852 +0000 UTC m=+1164.187159198" lastFinishedPulling="2026-01-30 16:41:56.044605904 +0000 UTC m=+1170.682563250" observedRunningTime="2026-01-30 16:41:59.020597384 +0000 UTC m=+1173.658554740" watchObservedRunningTime="2026-01-30 16:41:59.027144359 +0000 UTC m=+1173.665101705" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.335873 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:41:59 crc kubenswrapper[4766]: E0130 16:41:59.336706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.336801 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: E0130 16:41:59.336879 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 
crc kubenswrapper[4766]: I0130 16:41:59.336940 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.337155 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.337274 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.338257 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.341347 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.364003 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.482376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.482779 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483245 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483303 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584410 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584599 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585738 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585930 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.609636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.666061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.770506 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:42:00 crc kubenswrapper[4766]: I0130 16:42:00.055199 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" path="/var/lib/kubelet/pods/19522cbf-c17c-411f-9732-986bd8ea5c1f/volumes" Jan 30 16:42:07 crc kubenswrapper[4766]: I0130 16:42:07.726907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.084979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerStarted","Data":"608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088845 4766 generic.go:334] "Generic (PLEG): container finished" podID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerID="3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7" exitCode=0 Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerStarted","Data":"12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.111595 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jpmx7" podStartSLOduration=2.544435285 podStartE2EDuration="20.111574868s" podCreationTimestamp="2026-01-30 16:41:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:49.826955131 +0000 UTC m=+1164.464912477" lastFinishedPulling="2026-01-30 16:42:07.394094714 +0000 UTC m=+1182.032052060" observedRunningTime="2026-01-30 
16:42:08.102070363 +0000 UTC m=+1182.740027709" watchObservedRunningTime="2026-01-30 16:42:08.111574868 +0000 UTC m=+1182.749532214" Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.103802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerStarted","Data":"16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba"} Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.104443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.140778 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podStartSLOduration=10.140745723 podStartE2EDuration="10.140745723s" podCreationTimestamp="2026-01-30 16:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:09.130147028 +0000 UTC m=+1183.768104394" watchObservedRunningTime="2026-01-30 16:42:09.140745723 +0000 UTC m=+1183.778703069" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.152682 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.593608 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.595081 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.607683 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.692413 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.694262 4766 util.go:30] "No sandbox for pod can be found. 
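For dnsmasq-dns-77585f5f8c-jfh6l above, both pull timestamps sit at Go's zero time (0001-01-01 00:00:00 +0000 UTC), kubelet's way of recording that no image pull happened, which is why podStartSLOduration and podStartE2EDuration are identical at 10.14s. Tooling that differences these fields has to special-case the zero value before parsing (sketch; helper name mine):

from datetime import datetime, timezone

GO_ZERO_PREFIX = "0001-01-01"  # Go's time.Time zero value as printed by klog

def pull_window_seconds(first: str, last: str) -> float:
    """Image-pull window from a pod_startup_latency_tracker line; zero-valued
    timestamps mean the image was already present and nothing was pulled."""
    if first.startswith(GO_ZERO_PREFIX) or last.startswith(GO_ZERO_PREFIX):
        return 0.0
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    begin = datetime.strptime(first[:26], fmt).replace(tzinfo=timezone.utc)
    end = datetime.strptime(last[:26], fmt).replace(tzinfo=timezone.utc)
    return (end - begin).total_seconds()

print(pull_window_seconds("0001-01-01 00:00:00 +0000 UTC",
                          "0001-01-01 00:00:00 +0000 UTC"))  # -> 0.0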
Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.696520 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.705531 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.813696 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816370 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816601 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.817994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.818091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.818406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.839065 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.864963 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.878842 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.912655 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.914863 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921085 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921406 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921831 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921955 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.922565 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.931302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.933486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.951560 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.966876 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.011806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023349 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023592 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.026428 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.027642 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.033353 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.040907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125779 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125819 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126025 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126099 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126136 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.134015 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.135554 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.136229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.144396 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.180157 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.227846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228430 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228553 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.229065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.258647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.259807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.260323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.261901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.265802 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.330977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29vj\" (UniqueName: 
\"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.332069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.332303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.359931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.363679 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.402223 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.422885 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.439450 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.447483 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.631991 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.618737 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.626840 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:12 crc kubenswrapper[4766]: W0130 16:42:12.634661 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb242f466_9049_49a9_b655_b270790de9ce.slice/crio-c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c WatchSource:0}: Error finding container c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c: Status 404 returned error can't find the container with id c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.853503 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.891924 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:12 crc kubenswrapper[4766]: W0130 16:42:12.902763 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod199b8ae3_05c7_4785_9590_1cb06cce0013.slice/crio-70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a WatchSource:0}: Error finding container 70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a: Status 404 returned error can't find the container with id 70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.945218 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.990588 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.017220 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.182723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerStarted","Data":"02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.185778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerStarted","Data":"14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.187699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerStarted","Data":"c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.189345 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" 
event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerStarted","Data":"9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.191564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerStarted","Data":"75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.193327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerStarted","Data":"70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.197149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerStarted","Data":"b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.197302 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerStarted","Data":"a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.228563 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-x95v6" podStartSLOduration=3.228535213 podStartE2EDuration="3.228535213s" podCreationTimestamp="2026-01-30 16:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:13.217300892 +0000 UTC m=+1187.855258248" watchObservedRunningTime="2026-01-30 16:42:13.228535213 +0000 UTC m=+1187.866492559" Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.209082 4766 generic.go:334] "Generic (PLEG): container finished" podID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerID="b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.209475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerDied","Data":"b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.213313 4766 generic.go:334] "Generic (PLEG): container finished" podID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerID="384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.213370 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerDied","Data":"384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.217502 4766 generic.go:334] "Generic (PLEG): container finished" podID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerID="89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.217543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" 
event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerDied","Data":"89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.219513 4766 generic.go:334] "Generic (PLEG): container finished" podID="db058df5-07b8-4d6e-a646-48ac7105c516" containerID="3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.219562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerDied","Data":"3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.221091 4766 generic.go:334] "Generic (PLEG): container finished" podID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerID="46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.221139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerDied","Data":"46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.230397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerDied","Data":"8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.231530 4766 generic.go:334] "Generic (PLEG): container finished" podID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerID="8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.668360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.734363 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.734645 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rghwg" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" containerID="cri-o://d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" gracePeriod=10 Jan 30 16:42:15 crc kubenswrapper[4766]: I0130 16:42:15.257365 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerID="d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" exitCode=0 Jan 30 16:42:15 crc kubenswrapper[4766]: I0130 16:42:15.257593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7"} Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.731423 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.740251 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.751136 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.758576 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.772658 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.784559 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.794978 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876778 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"199b8ae3-05c7-4785-9590-1cb06cce0013\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876843 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"3747d6ac-f476-429b-83b8-c5a65a241d47\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"199b8ae3-05c7-4785-9590-1cb06cce0013\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"db058df5-07b8-4d6e-a646-48ac7105c516\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"db058df5-07b8-4d6e-a646-48ac7105c516\" (UID: 
\"db058df5-07b8-4d6e-a646-48ac7105c516\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"3747d6ac-f476-429b-83b8-c5a65a241d47\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877165 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "199b8ae3-05c7-4785-9590-1cb06cce0013" (UID: "199b8ae3-05c7-4785-9590-1cb06cce0013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81d680b3-ced9-4a2a-9a50-780e6239b4a5" (UID: "81d680b3-ced9-4a2a-9a50-780e6239b4a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877804 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3747d6ac-f476-429b-83b8-c5a65a241d47" (UID: "3747d6ac-f476-429b-83b8-c5a65a241d47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db058df5-07b8-4d6e-a646-48ac7105c516" (UID: "db058df5-07b8-4d6e-a646-48ac7105c516"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878505 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878528 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878536 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878545 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878601 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10bcd3d7-2c30-4a51-9455-2ffed88a7f43" (UID: "10bcd3d7-2c30-4a51-9455-2ffed88a7f43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.885496 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4" (OuterVolumeSpecName: "kube-api-access-wd6l4") pod "199b8ae3-05c7-4785-9590-1cb06cce0013" (UID: "199b8ae3-05c7-4785-9590-1cb06cce0013"). InnerVolumeSpecName "kube-api-access-wd6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.885548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t" (OuterVolumeSpecName: "kube-api-access-h9h7t") pod "3747d6ac-f476-429b-83b8-c5a65a241d47" (UID: "3747d6ac-f476-429b-83b8-c5a65a241d47"). InnerVolumeSpecName "kube-api-access-h9h7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.886551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj" (OuterVolumeSpecName: "kube-api-access-c29vj") pod "10bcd3d7-2c30-4a51-9455-2ffed88a7f43" (UID: "10bcd3d7-2c30-4a51-9455-2ffed88a7f43"). InnerVolumeSpecName "kube-api-access-c29vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.900966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs" (OuterVolumeSpecName: "kube-api-access-lbsrs") pod "db058df5-07b8-4d6e-a646-48ac7105c516" (UID: "db058df5-07b8-4d6e-a646-48ac7105c516"). InnerVolumeSpecName "kube-api-access-lbsrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.906960 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4" (OuterVolumeSpecName: "kube-api-access-4scx4") pod "81d680b3-ced9-4a2a-9a50-780e6239b4a5" (UID: "81d680b3-ced9-4a2a-9a50-780e6239b4a5"). InnerVolumeSpecName "kube-api-access-4scx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979869 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"1caad6ca-26a4-488c-8b03-90da40a955b0\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"1caad6ca-26a4-488c-8b03-90da40a955b0\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980243 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980654 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980679 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") on node \"crc\" 
DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980689 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980700 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980710 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980720 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.981802 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1caad6ca-26a4-488c-8b03-90da40a955b0" (UID: "1caad6ca-26a4-488c-8b03-90da40a955b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.984423 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs" (OuterVolumeSpecName: "kube-api-access-xbrfs") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "kube-api-access-xbrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.989547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7" (OuterVolumeSpecName: "kube-api-access-bqgb7") pod "1caad6ca-26a4-488c-8b03-90da40a955b0" (UID: "1caad6ca-26a4-488c-8b03-90da40a955b0"). InnerVolumeSpecName "kube-api-access-bqgb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.026407 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.027575 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.028825 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.038027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config" (OuterVolumeSpecName: "config") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083669 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083703 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083715 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083725 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083736 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083748 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083758 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerDied","Data":"a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342554 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342696 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.357674 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.358318 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.358432 4766 scope.go:117] "RemoveContainer" containerID="d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362036 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerDied","Data":"02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362126 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362258 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375443 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375454 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerDied","Data":"14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375485 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.378980 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerStarted","Data":"88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerDied","Data":"9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388202 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388286 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerDied","Data":"75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392605 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392675 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.398685 4766 scope.go:117] "RemoveContainer" containerID="e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.399302 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerDied","Data":"70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407917 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407936 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.409328 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.433117 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8p4hm" podStartSLOduration=3.029395202 podStartE2EDuration="11.433089922s" podCreationTimestamp="2026-01-30 16:42:10 +0000 UTC" firstStartedPulling="2026-01-30 16:42:12.641758024 +0000 UTC m=+1187.279715370" lastFinishedPulling="2026-01-30 16:42:21.045452744 +0000 UTC m=+1195.683410090" observedRunningTime="2026-01-30 16:42:21.416884247 +0000 UTC m=+1196.054841603" watchObservedRunningTime="2026-01-30 16:42:21.433089922 +0000 UTC m=+1196.071047268" Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.050616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" path="/var/lib/kubelet/pods/c4db25e7-718f-4a48-8dd2-2db2ae9e804c/volumes" Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.418428 4766 generic.go:334] "Generic (PLEG): container finished" podID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerID="608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161" exitCode=0 Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.418517 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerDied","Data":"608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161"} Jan 30 16:42:23 crc kubenswrapper[4766]: I0130 16:42:23.978615 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.141689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.141888 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.142185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.142209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.150497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs" (OuterVolumeSpecName: "kube-api-access-dfprs") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "kube-api-access-dfprs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.152915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.169672 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.193352 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data" (OuterVolumeSpecName: "config-data") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.243850 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.243887 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.244086 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.244102 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.388492 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-rghwg" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerDied","Data":"156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59"} Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435638 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435667 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.852469 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853276 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853301 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853318 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="init" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853326 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="init" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853346 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853357 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853372 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853379 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853391 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853418 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853427 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853448 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853456 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853472 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853492 4766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853501 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853717 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853739 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853750 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853770 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853782 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853796 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853810 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.854886 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.887294 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956850 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956989 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.957028 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.957047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059239 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059445 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.061104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.062121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.080209 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lvx\" (UniqueName: 
\"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.185164 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.443575 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:25 crc kubenswrapper[4766]: W0130 16:42:25.455432 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52befca_b3ab_4e81_bc0f_c828a8bdc49b.slice/crio-ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c WatchSource:0}: Error finding container ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c: Status 404 returned error can't find the container with id ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.453645 4766 generic.go:334] "Generic (PLEG): container finished" podID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" exitCode=0 Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.453757 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0"} Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.454025 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerStarted","Data":"ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c"} Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.457111 4766 generic.go:334] "Generic (PLEG): container finished" podID="b242f466-9049-49a9-b655-b270790de9ce" containerID="88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00" exitCode=0 Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.457168 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerDied","Data":"88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00"} Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.467936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerStarted","Data":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.497444 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" podStartSLOduration=3.497420496 podStartE2EDuration="3.497420496s" podCreationTimestamp="2026-01-30 16:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:27.490703996 +0000 UTC m=+1202.128661352" watchObservedRunningTime="2026-01-30 16:42:27.497420496 +0000 UTC m=+1202.135377862" Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.827535 4766 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015156 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015373 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.021003 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp" (OuterVolumeSpecName: "kube-api-access-gwzpp") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "kube-api-access-gwzpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.037943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.067483 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data" (OuterVolumeSpecName: "config-data") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117866 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117895 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117905 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.476952 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerDied","Data":"c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c"} Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477376 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477433 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.822969 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.861731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:28 crc kubenswrapper[4766]: E0130 16:42:28.862108 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862128 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862313 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862871 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872230 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872798 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.887649 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.888738 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.928772 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.930230 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.956321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.058815 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059106 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059696 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059746 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059924 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.111807 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.113005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.126705 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.126872 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.127287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d97nd" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: 
\"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163718 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.164849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.166060 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: 
I0130 16:42:29.177943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.183861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.188801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.197757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.226780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.228348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.235363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.270015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.271149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.271256 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc 
kubenswrapper[4766]: I0130 16:42:29.271284 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.297276 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.299018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.309519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.310006 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.314518 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rbvkd" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.343256 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.368684 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.370002 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372480 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372612 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: 
\"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372658 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386613 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386838 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fh4lz" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386996 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.414886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.427537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.468460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.473233 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474349 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474384 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474579 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.478515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.489634 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491554 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491745 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.504085 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.504721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.507780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.515951 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.518197 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.525828 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.536828 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.557258 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.557929 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.583262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603911 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.604737 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.612119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.622053 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.653596 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.653610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.656505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.684665 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.686539 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709518 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709676 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709744 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.730956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.744341 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.747511 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.751151 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47zjc" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.758569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.766058 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.775812 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813321 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813825 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.814031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815555 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815791 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816122 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816260 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.822425 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.823859 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.857231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.857502 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.860487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.868801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.870110 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.874452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.908239 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925739 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.926860 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.927340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.927717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.928069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.933325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.962714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.994089 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.995840 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.008629 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009248 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009509 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6xjc8" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009676 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.034555 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.034638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.037784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.063885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.100326 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.108922 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.111688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.116286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.116644 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.133923 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135761 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239716 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239843 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239939 4766 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239990 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240757 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240838 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.241399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.241680 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.245365 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.247037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.250045 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.262327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.263361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.295188 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.304464 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22fc62b3_3a89_44ec_8f23_4182b363478c.slice/crio-cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e WatchSource:0}: Error finding container cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e: Status 404 returned error can't find the container with id cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.314110 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.317841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.326113 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.344087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.344388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.345021 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.349052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.349395 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.352990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.370092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.378486 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.397519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 
16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.480349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.533055 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" containerID="cri-o://956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" gracePeriod=10 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.533087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerStarted","Data":"cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"} Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.593646 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.605209 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020df37b_56f5_4f59_8c96_faaea5bb7e27.slice/crio-bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8 WatchSource:0}: Error finding container bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8: Status 404 returned error can't find the container with id bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.606350 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.617409 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.673169 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc27037_152a_461b_bce1_6d37b38bbb95.slice/crio-fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282 WatchSource:0}: Error finding container fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282: Status 404 returned error can't find the container with id fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.805000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.824369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.050118 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.053035 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:31 crc kubenswrapper[4766]: W0130 16:42:31.112889 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ccb2d3_4270_48e3_99cc_6031edfa30ae.slice/crio-de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26 WatchSource:0}: Error finding container de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26: Status 404 returned error can't find the container with id 
de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.214439 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.354905 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.479007 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.479730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480259 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480836 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.512631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx" (OuterVolumeSpecName: "kube-api-access-22lvx") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "kube-api-access-22lvx". 
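[Note] The entries above show both halves of the kubelet volume reconciler's lifecycle: on pod creation, `operationExecutor.VerifyControllerAttachedVolume started` → `MountVolume.MountDevice succeeded` (node-global, for the local PV) → `MountVolume.SetUp succeeded` (per pod); on deletion, `operationExecutor.UnmountVolume started` → `UnmountVolume.TearDown succeeded`. A rough Go extractor for these phases is sketched below. It assumes one kubelet entry per line, as kubelet.log is originally written (this archive re-wraps lines), and that structured fields keep the escaped-quote form shown above; it is illustrative, not part of the kubelet.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Volume-lifecycle phases as they appear in the log: the reconciler logs
// "operationExecutor.<Op> started", and the operation generator logs
// "<Op> succeeded" once the phase completes.
var re = regexp.MustCompile(
	`operationExecutor\.(\w+) started for volume \\"([^"\\]+)\\"` +
		`|(MountVolume\.\w+|UnmountVolume\.TearDown) succeeded for volume \\?"([^"\\]+)\\?"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // kubelet entries can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			if m[1] != "" {
				fmt.Printf("started   %-40s %s\n", m[1], m[2])
			} else {
				fmt.Printf("succeeded %-40s %s\n", m[3], m[4])
			}
		}
	}
}
```

Run it as `go run extract.go < kubelet.log` against the unwrapped log to get a per-volume timeline of the mounts and teardowns interleaved above.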
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.586232 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592322 4766 generic.go:334] "Generic (PLEG): container finished" podID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" exitCode=0 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592419 4766 scope.go:117] "RemoveContainer" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592547 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.598604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerStarted","Data":"de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.606818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617205 4766 generic.go:334] "Generic (PLEG): container finished" podID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerID="f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991" exitCode=0 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617275 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerDied","Data":"f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerStarted","Data":"bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.629282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerStarted","Data":"c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.629330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerStarted","Data":"fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.641851 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.643550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerStarted","Data":"e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.648949 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerStarted","Data":"7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.653548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerStarted","Data":"229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.662997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerStarted","Data":"486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.673406 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"49e6a264688b5efa68e5dd3bb58dc0b650db2a13ee17de4b4093f263fc716ec3"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.690426 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.690471 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: W0130 16:42:31.695298 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89845731_1ffc_4f79_a979_d83068cebc2a.slice/crio-8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d WatchSource:0}: Error finding container 8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d: Status 404 returned error can't find the container with id 8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.695450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"80541219c3010f86d328821046e3eb93ce24469ac922b57c41a30f77d511e82f"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.732422 4766 scope.go:117] "RemoveContainer" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.759375 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sc6rp" podStartSLOduration=2.759350858 podStartE2EDuration="2.759350858s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 16:42:31.668126731 +0000 UTC m=+1206.306084077" watchObservedRunningTime="2026-01-30 16:42:31.759350858 +0000 UTC m=+1206.397308204" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.760236 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.767296 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hbqvh" podStartSLOduration=3.7672767609999998 podStartE2EDuration="3.767276761s" podCreationTimestamp="2026-01-30 16:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:31.703048208 +0000 UTC m=+1206.341005554" watchObservedRunningTime="2026-01-30 16:42:31.767276761 +0000 UTC m=+1206.405234107" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.768707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.780493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config" (OuterVolumeSpecName: "config") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798457 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798490 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798501 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.250257 4766 scope.go:117] "RemoveContainer" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:32 crc kubenswrapper[4766]: E0130 16:42:32.250964 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": container with ID starting with 956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0 not found: ID does not exist" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.251014 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} err="failed to get container status \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": rpc error: code = NotFound desc = could not find container \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": container with ID starting with 956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0 not found: ID does not exist" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.251043 4766 scope.go:117] "RemoveContainer" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:32 crc kubenswrapper[4766]: E0130 16:42:32.255999 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": container with ID starting with 4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0 not found: ID does not exist" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.256046 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0"} err="failed to get container status \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": rpc error: code = NotFound desc = could not find container \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": container with ID starting with 4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0 not found: ID does not exist" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.615299 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.712050 
4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.758653 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.760329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerDied","Data":"bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.760373 4766 scope.go:117] "RemoveContainer" containerID="f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.807705 4766 generic.go:334] "Generic (PLEG): container finished" podID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerID="23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81" exitCode=0 Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.807766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.816413 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.822611 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855732 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855800 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855894 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855994 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.906189 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z" (OuterVolumeSpecName: "kube-api-access-mgd4z") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "kube-api-access-mgd4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.958513 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.974069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.988389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.988572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.992765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config" (OuterVolumeSpecName: "config") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.007442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066677 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066961 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066970 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066979 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066988 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.619684 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.781333 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.795502 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.863855 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f"} Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.865821 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.876954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerStarted","Data":"05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754"} Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.878287 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.998461 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" podStartSLOduration=4.998439894 podStartE2EDuration="4.998439894s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:33.916511207 +0000 UTC m=+1208.554468543" watchObservedRunningTime="2026-01-30 16:42:33.998439894 +0000 UTC m=+1208.636397240" Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.036093 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.100216 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" path="/var/lib/kubelet/pods/b52befca-b3ab-4e81-bc0f-c828a8bdc49b/volumes" Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.105964 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161"} Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891136 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" containerID="cri-o://12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b" gracePeriod=30 Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891232 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" containerID="cri-o://4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" gracePeriod=30 Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.920917 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.920889955 podStartE2EDuration="6.920889955s" podCreationTimestamp="2026-01-30 16:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:34.911158735 +0000 UTC m=+1209.549116101" watchObservedRunningTime="2026-01-30 16:42:34.920889955 +0000 UTC m=+1209.558847311" Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8"} Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926381 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log" containerID="cri-o://05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f" gracePeriod=30 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926949 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd" containerID="cri-o://6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8" gracePeriod=30 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.945441 4766 generic.go:334] "Generic (PLEG): container finished" podID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerID="4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" exitCode=143 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.945476 4766 generic.go:334] "Generic (PLEG): container finished" podID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerID="12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b" exitCode=143 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.946278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161"} Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.946341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.089690 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.089668454 podStartE2EDuration="7.089668454s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:35.955522746 +0000 UTC m=+1210.593480092" watchObservedRunningTime="2026-01-30 16:42:36.089668454 +0000 UTC m=+1210.727625800" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.093516 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" path="/var/lib/kubelet/pods/020df37b-56f5-4f59-8c96-faaea5bb7e27/volumes" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.380359 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438009 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438077 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438297 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438318 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438340 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438375 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438422 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.439761 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs" (OuterVolumeSpecName: "logs") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.440376 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.447425 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts" (OuterVolumeSpecName: "scripts") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.459849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.468570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj" (OuterVolumeSpecName: "kube-api-access-97hjj") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "kube-api-access-97hjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.482106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.510721 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.517459 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data" (OuterVolumeSpecName: "config-data") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.543963 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544012 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544057 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544071 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544083 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544127 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544140 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544152 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.560240 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.645652 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957697 4766 generic.go:334] "Generic (PLEG): container finished" podID="89845731-1ffc-4f79-a979-d83068cebc2a" containerID="6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8" exitCode=0 Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957731 4766 generic.go:334] "Generic (PLEG): container finished" podID="89845731-1ffc-4f79-a979-d83068cebc2a" containerID="05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f" exitCode=143 Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957811 4766 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"49e6a264688b5efa68e5dd3bb58dc0b650db2a13ee17de4b4093f263fc716ec3"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961228 4766 scope.go:117] "RemoveContainer" containerID="4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961257 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.010710 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.024277 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036395 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036884 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036919 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036940 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036949 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036971 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036980 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036999 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.037019 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037026 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037247 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 
16:42:37.037277 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037291 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037313 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.038335 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.047050 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.053397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.061377 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161861 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162114 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264091 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264246 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264547 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.265519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.265617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.274857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.275116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.276283 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.279001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.286080 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.295556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.360284 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 16:42:38 crc kubenswrapper[4766]: I0130 16:42:38.053702 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" path="/var/lib/kubelet/pods/2654a202-1ccf-4de3-90bf-3bc6f15de239/volumes"
Jan 30 16:42:39 crc kubenswrapper[4766]: I0130 16:42:39.990361 4766 generic.go:334] "Generic (PLEG): container finished" podID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerID="486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57" exitCode=0
Jan 30 16:42:39 crc kubenswrapper[4766]: I0130 16:42:39.990534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerDied","Data":"486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57"}
Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.011465 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7"
Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.085218 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"]
Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.085860 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" containerID="cri-o://16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba" gracePeriod=10
Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.912555 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.023668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d"}
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.023795 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.030311 4766 generic.go:334] "Generic (PLEG): container finished" podID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerID="16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba" exitCode=0
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.030569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba"}
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061421 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061628 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061803 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs" (OuterVolumeSpecName: "logs") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062509 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.074335 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.092575 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh" (OuterVolumeSpecName: "kube-api-access-lmsmh") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "kube-api-access-lmsmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.092888 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts" (OuterVolumeSpecName: "scripts") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.119263 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.155431 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166564 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166626 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166637 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166647 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166655 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.168200 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.170404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data" (OuterVolumeSpecName: "config-data") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.197596 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.269979 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.270022 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.450672 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.458555 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474270 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:42:41 crc kubenswrapper[4766]: E0130 16:42:41.474662 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474676 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd"
Jan 30 16:42:41 crc kubenswrapper[4766]: E0130 16:42:41.474693 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474699 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474863 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474879 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.475831 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.475926 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.488225 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.505011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593949 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594102 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696366 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.697907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.698407 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.698660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.705823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.708949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.716130 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.724418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.725543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.773735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.820737 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:42 crc kubenswrapper[4766]: I0130 16:42:42.080102 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" path="/var/lib/kubelet/pods/89845731-1ffc-4f79-a979-d83068cebc2a/volumes"
Jan 30 16:42:49 crc kubenswrapper[4766]: I0130 16:42:49.667759 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:54 crc kubenswrapper[4766]: I0130 16:42:54.669223 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:59 crc kubenswrapper[4766]: I0130 16:42:59.670282 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:59 crc kubenswrapper[4766]: I0130 16:42:59.671046 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.182514 4766 scope.go:117] "RemoveContainer" containerID="12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.210997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerDied","Data":"cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"}
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.211045 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.215953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"}
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.215994 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.283559 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.283773 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k75sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-mq5sq_openstack(83c08adc-cebc-4bff-8994-d8f1f0cb59d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.285863 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-mq5sq" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.294358 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.300333 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387218 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388150 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388198 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388227 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.394709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv" (OuterVolumeSpecName: "kube-api-access-nmfnv") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "kube-api-access-nmfnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.396520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.397570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts" (OuterVolumeSpecName: "scripts") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.398741 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.417380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r" (OuterVolumeSpecName: "kube-api-access-nxp2r") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "kube-api-access-nxp2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.441941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.444826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data" (OuterVolumeSpecName: "config-data") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.445372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.447454 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config" (OuterVolumeSpecName: "config") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.449039 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.457032 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.464112 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492199 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492251 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492262 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492362 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492378 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492393 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492408 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492468 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492504 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492515 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492526 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492538 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.225101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.225383 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.229937 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-mq5sq" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.276531 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.285214 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.431640 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.454604 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522328 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522735 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="init"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522753 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="init"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522766 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522773 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522787 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522794 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522978 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.523014 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.523971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530204 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530495 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530966 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.532134 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.532432 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.537110 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612708 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612840 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714542 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.718992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.720066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.721485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.721897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.732227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.733334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.850165 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.049752 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" path="/var/lib/kubelet/pods/22fc62b3-3a89-44ec-8f23-4182b363478c/volumes"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.050326 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" path="/var/lib/kubelet/pods/5be49188-9169-438f-a8df-6bd5d8dd29fd/volumes"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.671982 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.238321 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.239052 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g5xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zgzf5_openstack(ad8b317f-6f81-4ac9-a854-7b71e384ed98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.240315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zgzf5" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.304493 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-zgzf5" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.018230 4766 scope.go:117] "RemoveContainer" containerID="6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8" Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.039344 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.039520 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2627,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rxmkt_openstack(3a05e847-bb50-49ab-821d-e2432c0f01e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.040747 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rxmkt" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.171784 4766 scope.go:117] "RemoveContainer" containerID="05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f" Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.343867 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"} Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.353970 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" 
pod="openstack/cinder-db-sync-rxmkt" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.550508 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"] Jan 30 16:43:14 crc kubenswrapper[4766]: W0130 16:43:14.572665 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59eff57d_cb92_4c52_aad2_6e43b3908fd4.slice/crio-a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2 WatchSource:0}: Error finding container a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2: Status 404 returned error can't find the container with id a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2 Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.696657 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.238406 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.353100 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a"} Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.353161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"323ddb58f9d31b5bc758e9920b4b5a6270bffb075aa3aec77b37c8af05f7ec01"} Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.354557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerStarted","Data":"fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508"} Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.354595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerStarted","Data":"a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2"} Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.377678 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2jkw8" podStartSLOduration=13.377659578 podStartE2EDuration="13.377659578s" podCreationTimestamp="2026-01-30 16:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:15.370385123 +0000 UTC m=+1250.008342469" watchObservedRunningTime="2026-01-30 16:43:15.377659578 +0000 UTC m=+1250.015616924" Jan 30 16:43:15 crc kubenswrapper[4766]: W0130 16:43:15.502248 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f88e91_eb62_45a5_bfcb_d38a918e23da.slice/crio-935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7 WatchSource:0}: Error finding container 935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7: Status 404 returned error can't find the container with id 935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7 Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.371645 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"} Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.377489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938"} Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.381678 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"} Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.381920 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7"} Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.404578 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=35.404558272 podStartE2EDuration="35.404558272s" podCreationTimestamp="2026-01-30 16:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:16.398079257 +0000 UTC m=+1251.036036603" watchObservedRunningTime="2026-01-30 16:43:16.404558272 +0000 UTC m=+1251.042515618" Jan 30 16:43:17 crc kubenswrapper[4766]: I0130 16:43:17.393162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"} Jan 30 16:43:17 crc kubenswrapper[4766]: I0130 16:43:17.425779 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=40.425756351 podStartE2EDuration="40.425756351s" podCreationTimestamp="2026-01-30 16:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:17.411294313 +0000 UTC m=+1252.049251669" watchObservedRunningTime="2026-01-30 16:43:17.425756351 +0000 UTC m=+1252.063713697" Jan 30 16:43:18 crc kubenswrapper[4766]: I0130 16:43:18.404409 4766 generic.go:334] "Generic (PLEG): container finished" podID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerID="fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508" exitCode=0 Jan 30 16:43:18 crc kubenswrapper[4766]: I0130 16:43:18.404476 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerDied","Data":"fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508"} Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.821329 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.821743 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.850381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.860221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.874588 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.955796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.955984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956401 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956587 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961011 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961406 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts" (OuterVolumeSpecName: "scripts") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961592 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq" (OuterVolumeSpecName: "kube-api-access-d9kbq") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "kube-api-access-d9kbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.963016 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.990866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data" (OuterVolumeSpecName: "config-data") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.993637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.059831 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060347 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060362 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060373 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060397 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450250 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerDied","Data":"a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2"} Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450296 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450260 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.453003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"} Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457032 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerStarted","Data":"d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8"} Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457114 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457139 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.487549 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mq5sq" podStartSLOduration=2.401719007 podStartE2EDuration="53.487523576s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.839743703 +0000 UTC m=+1205.477701049" lastFinishedPulling="2026-01-30 16:43:21.925548262 +0000 UTC m=+1256.563505618" observedRunningTime="2026-01-30 16:43:22.485648545 +0000 UTC m=+1257.123605941" watchObservedRunningTime="2026-01-30 16:43:22.487523576 +0000 UTC m=+1257.125480962" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967262 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:43:22 crc kubenswrapper[4766]: E0130 16:43:22.967724 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967740 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967993 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.968679 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.971249 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.986154 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987252 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987860 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987933 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:22.999853 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088477 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" 
(UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088682 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088713 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190616 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190678 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " 
pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.196948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.197087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.198132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.199747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.200658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.201365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.202714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.219651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.292528 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.794880 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.480275 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.480674 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482512 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerStarted","Data":"7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188"} Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482595 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerStarted","Data":"f7e59fee20a8c8c4ebf0975c2f9adc338f4c7ce8ad17f7e1383af919425199ff"} Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.512256 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bc6f65df6-mx4xk" podStartSLOduration=2.511870112 podStartE2EDuration="2.511870112s" podCreationTimestamp="2026-01-30 16:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:24.501916455 +0000 UTC m=+1259.139873801" watchObservedRunningTime="2026-01-30 16:43:24.511870112 +0000 UTC m=+1259.149827458" Jan 30 16:43:24 crc kubenswrapper[4766]: E0130 16:43:24.602451 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c08adc_cebc_4bff_8994_d8f1f0cb59d7.slice/crio-d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.776975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.777937 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.493227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerStarted","Data":"41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3"} Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.497802 4766 generic.go:334] "Generic (PLEG): container finished" podID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerID="d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8" exitCode=0 Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.497938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerDied","Data":"d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8"} Jan 30 16:43:25 crc 
kubenswrapper[4766]: I0130 16:43:25.545824 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zgzf5" podStartSLOduration=2.623225568 podStartE2EDuration="56.545805484s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:31.031497736 +0000 UTC m=+1205.669455082" lastFinishedPulling="2026-01-30 16:43:24.954077662 +0000 UTC m=+1259.592034998" observedRunningTime="2026-01-30 16:43:25.514491494 +0000 UTC m=+1260.152448840" watchObservedRunningTime="2026-01-30 16:43:25.545805484 +0000 UTC m=+1260.183762830" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.517246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerStarted","Data":"590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7"} Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.545976 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rxmkt" podStartSLOduration=2.67032561 podStartE2EDuration="57.545957069s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.598517512 +0000 UTC m=+1205.236474858" lastFinishedPulling="2026-01-30 16:43:25.474148971 +0000 UTC m=+1260.112106317" observedRunningTime="2026-01-30 16:43:26.537962405 +0000 UTC m=+1261.175919751" watchObservedRunningTime="2026-01-30 16:43:26.545957069 +0000 UTC m=+1261.183914415" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.929345 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960863 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.961048 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.964642 4766 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs" (OuterVolumeSpecName: "logs") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.969281 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts" (OuterVolumeSpecName: "scripts") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.971229 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk" (OuterVolumeSpecName: "kube-api-access-k75sk") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "kube-api-access-k75sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: E0130 16:43:26.990516 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data podName:83c08adc-cebc-4bff-8994-d8f1f0cb59d7 nodeName:}" failed. No retries permitted until 2026-01-30 16:43:27.490486292 +0000 UTC m=+1262.128443638 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7") : error deleting /var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volume-subpaths: remove /var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volume-subpaths: no such file or directory Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.993224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065771 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065809 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065823 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065841 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.360752 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.360804 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.392865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.413641 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.528452 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerDied","Data":"7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f"} Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530766 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530796 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.531063 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.574316 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.578501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data" (OuterVolumeSpecName: "config-data") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.677314 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.691986 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:27 crc kubenswrapper[4766]: E0130 16:43:27.692419 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.692450 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.692681 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.693982 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.696331 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.697705 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.715414 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.778975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779048 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881877 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881921 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881988 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.884010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888136 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") 
" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888753 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.892870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.900169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.919621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:28 crc kubenswrapper[4766]: I0130 16:43:28.019428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.546412 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.546963 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.653647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.663365 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.308406 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.569427 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"e94bea3a22075449c7ce733d15ed50c31bf49ec686272c0a7961479d9194b9c6"} Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.576876 4766 generic.go:334] "Generic (PLEG): container finished" podID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerID="41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3" exitCode=0 Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.576921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerDied","Data":"41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588072 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588475 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588255 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd" containerID="cri-o://2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588169 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent" containerID="cri-o://8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588298 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core" containerID="cri-o://eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588259 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent" containerID="cri-o://29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592639 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.635668 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.131124551 podStartE2EDuration="1m4.635643766s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.830167056 +0000 UTC m=+1205.468124402" lastFinishedPulling="2026-01-30 16:43:32.334686271 +0000 UTC m=+1266.972643617" observedRunningTime="2026-01-30 16:43:33.622890913 +0000 UTC m=+1268.260848309" watchObservedRunningTime="2026-01-30 16:43:33.635643766 +0000 UTC m=+1268.273601152" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.656172 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69d8797fb6-zzsfd" podStartSLOduration=6.656145545 
podStartE2EDuration="6.656145545s" podCreationTimestamp="2026-01-30 16:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:33.647654858 +0000 UTC m=+1268.285612224" watchObservedRunningTime="2026-01-30 16:43:33.656145545 +0000 UTC m=+1268.294102891" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.918946 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.023191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs" (OuterVolumeSpecName: "kube-api-access-6g5xs") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "kube-api-access-6g5xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.023596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.043285 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "combined-ca-bundle". 
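The two "Observed pod startup duration" entries above expose the tracker's arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (for ceilometer-0, 1m4.635643766s minus the 61.504519215s between firstStartedPulling and lastFinishedPulling gives 3.131124551). The zeroed "0001-01-01 00:00:00 +0000 UTC" pull timestamps on placement-69d8797fb6-zzsfd indicate no pull was observed, which is why its SLO and E2E durations coincide. A small sketch that pulls the key=value fields out of such an entry:

package main

import (
	"fmt"
	"regexp"
)

// key=value pairs as printed by pod_startup_latency_tracker.go:104 above;
// values are either bare numbers or quoted strings.
var kv = regexp.MustCompile(`(\w+)=("[^"]*"|\S+)`)

func main() {
	entry := `pod="openstack/ceilometer-0" podStartSLOduration=3.131124551 ` +
		`podStartE2EDuration="1m4.635643766s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC"`
	for _, m := range kv.FindAllStringSubmatch(entry, -1) {
		fmt.Printf("%-22s %s\n", m[1], m[2])
	}
}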
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119486 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119542 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119562 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604728 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" exitCode=0 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604777 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" exitCode=2 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604799 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" exitCode=0 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606717 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerDied","Data":"e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606846 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.880729 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:43:34 crc kubenswrapper[4766]: E0130 16:43:34.881088 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.881102 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.881417 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.882274 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.885755 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47zjc" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.886256 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.886487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.930050 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.997621 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.999028 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.004558 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.028559 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035354 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035748 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035837 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.137905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.137981 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138280 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138425 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.144144 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc 
kubenswrapper[4766]: I0130 16:43:35.144788 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.148921 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.165630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.165946 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.166750 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.179957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.180494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.227477 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.241037 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.241114 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.242774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.252061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.252753 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.253390 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.259739 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.261267 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.266479 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.270903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.290762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.332586 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.345268 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.345949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.347707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.351697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.351954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.373651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.446752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447776 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447871 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.448487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.452652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.452744 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod 
\"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.453087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.474336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.629748 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.641827 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:35 crc kubenswrapper[4766]: W0130 16:43:35.785529 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13e6f63_37d4_4780_9902_430a9669901c.slice/crio-2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409 WatchSource:0}: Error finding container 2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409: Status 404 returned error can't find the container with id 2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409 Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.788505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:43:35 crc kubenswrapper[4766]: W0130 16:43:35.847157 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3 WatchSource:0}: Error finding container c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3: Status 404 returned error can't find the container with id c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3 Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.849402 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.108800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:36 crc kubenswrapper[4766]: W0130 16:43:36.112472 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee1aefba_bd2e_47f2_832c_7e74e707ad69.slice/crio-0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81 WatchSource:0}: Error finding container 0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81: Status 404 returned error can't find the container with id 0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81 Jan 30 16:43:36 crc kubenswrapper[4766]: W0130 16:43:36.114766 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0217e5_bcc8_482c_9e44_4be03ee7d059.slice/crio-92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf WatchSource:0}: Error finding container 92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf: Status 404 returned error can't find the container with id 92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.115949 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.651549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409"} Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.654464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3"} Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.656916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"} Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.656958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf"} Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663064 4766 generic.go:334] "Generic (PLEG): container finished" podID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerID="9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d" exitCode=0 Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d"} Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerStarted","Data":"0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81"} Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.685516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"} Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688020 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerID="590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7" exitCode=0 Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" 
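The W-level manager.go:1169 entries above appear to be cAdvisor racing the runtime: the cgroup watch fires for a new crio-<id> slice before CRI-O can answer for that ID, hence the 404. The same IDs (2b767d9a, c7517f7d, 0f05a642, 92d1aaa2) show up moments later as ContainerStarted data, so the warnings are transient. A sketch that cross-checks warned IDs against later PLEG events on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	warned      = regexp.MustCompile(`can't find the container with id (\w+)`)
	plegStarted = regexp.MustCompile(`"Type":"ContainerStarted","Data":"(\w+)"`)
)

func main() {
	pending := map[string]bool{} // IDs cAdvisor could not resolve yet
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if m := warned.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		} else if m := plegStarted.FindStringSubmatch(line); m != nil && pending[m[1]] {
			fmt.Printf("%.12s: 404 warning superseded by ContainerStarted\n", m[1])
			delete(pending, m[1])
		}
	}
	for id := range pending {
		fmt.Printf("%.12s: never seen starting\n", id)
	}
}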
event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerDied","Data":"590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7"} Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688683 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688867 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.774070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-869cbffcd-4n87d" podStartSLOduration=2.774046564 podStartE2EDuration="2.774046564s" podCreationTimestamp="2026-01-30 16:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:37.749566707 +0000 UTC m=+1272.387524073" watchObservedRunningTime="2026-01-30 16:43:37.774046564 +0000 UTC m=+1272.412003910" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.123421 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.125719 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.128084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.128287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.133116 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.134706 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211544 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211590 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211641 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211798 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212275 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212372 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.213239 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.213337 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.233064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s" (OuterVolumeSpecName: "kube-api-access-hr64s") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "kube-api-access-hr64s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.239864 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.242755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts" (OuterVolumeSpecName: "scripts") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.287205 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314377 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314517 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314736 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314760 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314810 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314824 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314837 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314884 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.315752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.316669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data" (OuterVolumeSpecName: "config-data") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.318120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.319614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.320110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.320895 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.328921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.331516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.416354 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.508428 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.717278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.717758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731042 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" exitCode=0 Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"80541219c3010f86d328821046e3eb93ce24469ac922b57c41a30f77d511e82f"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731223 4766 scope.go:117] "RemoveContainer" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731396 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.749733 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podStartSLOduration=3.149230515 podStartE2EDuration="4.749711052s" podCreationTimestamp="2026-01-30 16:43:34 +0000 UTC" firstStartedPulling="2026-01-30 16:43:35.85046251 +0000 UTC m=+1270.488419856" lastFinishedPulling="2026-01-30 16:43:37.450943047 +0000 UTC m=+1272.088900393" observedRunningTime="2026-01-30 16:43:38.741520733 +0000 UTC m=+1273.379478079" watchObservedRunningTime="2026-01-30 16:43:38.749711052 +0000 UTC m=+1273.387668398" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.752630 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerID="c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405" exitCode=0 Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.752719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerDied","Data":"c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.770274 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerStarted","Data":"bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.770407 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.774573 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.774667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6"} Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.819918 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" podStartSLOduration=3.819898765 podStartE2EDuration="3.819898765s" podCreationTimestamp="2026-01-30 16:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:38.798148401 +0000 UTC m=+1273.436105757" watchObservedRunningTime="2026-01-30 16:43:38.819898765 +0000 UTC m=+1273.457856111" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.845596 4766 scope.go:117] "RemoveContainer" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.847290 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d6c45fdd9-srlkx" podStartSLOduration=3.186033602 podStartE2EDuration="4.847268729s" podCreationTimestamp="2026-01-30 16:43:34 +0000 UTC" firstStartedPulling="2026-01-30 16:43:35.789732031 +0000 UTC m=+1270.427689387" lastFinishedPulling="2026-01-30 
16:43:37.450967168 +0000 UTC m=+1272.088924514" observedRunningTime="2026-01-30 16:43:38.820534242 +0000 UTC m=+1273.458491598" watchObservedRunningTime="2026-01-30 16:43:38.847268729 +0000 UTC m=+1273.485226075" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.879883 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.892256 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.900427 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901000 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901016 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901030 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901036 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901052 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901058 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901074 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901080 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901242 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901271 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901289 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901304 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.903578 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.904657 4766 scope.go:117] "RemoveContainer" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.908008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.908319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.913869 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.950210 4766 scope.go:117] "RemoveContainer" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976279 4766 scope.go:117] "RemoveContainer" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.976841 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": container with ID starting with 2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498 not found: ID does not exist" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976879 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} err="failed to get container status \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": rpc error: code = NotFound desc = could not find container \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": container with ID starting with 2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498 not found: ID does not exist" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976926 4766 scope.go:117] "RemoveContainer" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.977276 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": container with ID starting with eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616 not found: ID does not exist" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977299 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"} err="failed to get container status \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": rpc error: code = NotFound desc = could not find container \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": container with ID starting with eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616 not found: ID does not exist" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977320 4766 scope.go:117] "RemoveContainer" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" Jan 30 
16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.977651 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": container with ID starting with 29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61 not found: ID does not exist" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977715 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"} err="failed to get container status \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": rpc error: code = NotFound desc = could not find container \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": container with ID starting with 29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61 not found: ID does not exist" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977733 4766 scope.go:117] "RemoveContainer" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.979785 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": container with ID starting with 8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc not found: ID does not exist" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.979847 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"} err="failed to get container status \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": rpc error: code = NotFound desc = could not find container \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": container with ID starting with 8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc not found: ID does not exist" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:38.999980 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:43:39 crc kubenswrapper[4766]: W0130 16:43:39.009341 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17d6e828_fc05_46cb_9bee_bac08ebf331a.slice/crio-d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0 WatchSource:0}: Error finding container d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0: Status 404 returned error can't find the container with id d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0 Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028762 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028917 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029093 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029320 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.045046 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.045099 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc 
kubenswrapper[4766]: I0130 16:43:39.131891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132086 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132168 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.133121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.135028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.141391 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.141944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.142298 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.150316 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.159383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.228886 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.378811 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436305 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436585 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436621 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.438783 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.441728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.457050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627" (OuterVolumeSpecName: "kube-api-access-q2627") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "kube-api-access-q2627". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.459494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts" (OuterVolumeSpecName: "scripts") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.484380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.504609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data" (OuterVolumeSpecName: "config-data") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540805 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540838 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540849 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540861 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540870 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540878 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.729857 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.785423 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"cd2c2b2506c59c114c23d0ceb86a25fba0633c14ce109f4881053f349d4e17dc"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792676 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerDied","Data":"229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792925 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.975044 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b946b75c8-zb6q6" podStartSLOduration=1.975023837 podStartE2EDuration="1.975023837s" podCreationTimestamp="2026-01-30 16:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:39.823202575 +0000 UTC m=+1274.461160021" watchObservedRunningTime="2026-01-30 16:43:39.975023837 +0000 UTC m=+1274.612981183" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.979612 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:39 crc kubenswrapper[4766]: E0130 16:43:39.980011 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.980027 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.980281 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.981454 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.983405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.983972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.984086 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.984841 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rbvkd" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.006792 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050541 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.058268 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" path="/var/lib/kubelet/pods/14501411-a443-4f68-93ed-4cadcbc48b9f/volumes" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.059334 4766 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.081893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.084199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.108734 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162476 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162555 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162687 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162711 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162851 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162914 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.163012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.167828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.172104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.172960 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.178467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc 
kubenswrapper[4766]: I0130 16:43:40.191935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264659 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.265804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.266445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.268229 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.268898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.272608 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.273906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.284627 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.316824 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.318957 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: E0130 16:43:40.319368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.319388 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.319597 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.320589 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.328196 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.331152 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366135 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366428 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366459 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366545 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.372591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw" (OuterVolumeSpecName: "kube-api-access-ql6bw") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "kube-api-access-ql6bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.409424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config" (OuterVolumeSpecName: "config") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.426734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.430985 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.481256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.481857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482392 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482423 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482617 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482630 4766 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482641 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482759 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.490049 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.490760 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.491129 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.769979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817621 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerDied","Data":"fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282"} Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817639 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817693 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.820336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.820471 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns" containerID="cri-o://bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" gracePeriod=10 Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.914049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.025660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.059613 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.136813 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.138554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.195706 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217241 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 
16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217298 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217331 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.327398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328578 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.329467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: 
I0130 16:43:41.329752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.332360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.335229 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.336698 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.337574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.338494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.339404 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.343870 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.344206 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.346668 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.356770 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.376513 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d97nd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.394104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431360 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " 
pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431409 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.477006 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533704 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533796 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: 
\"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.544203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.549525 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.550207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.550853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.564657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.789843 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.869682 4766 generic.go:334] "Generic (PLEG): container finished" podID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerID="bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" exitCode=0 Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.869786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.871888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"6b7b6fbe45be35df26ed12004dacb8c6bf29682f09f9e1548db68481d831f9f3"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.876937 4766 generic.go:334] "Generic (PLEG): container finished" podID="e0cf707d-1c30-442d-8430-e714bd68752a" containerID="315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b" exitCode=0 Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.877012 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerDied","Data":"315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.877050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerStarted","Data":"d9494d16b1950242e2d85088ae6e45881e6fe2494c0a57e45b5cbe2dedb19001"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.895324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"c8586f92647bbb5a114dcd6f6899c5036c3e271083fa860bf64d7866744bcc76"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.011079 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:42 crc kubenswrapper[4766]: W0130 16:43:42.081857 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9443ad_23f2_4953_8fe3_1e30cddbb3ae.slice/crio-4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146 WatchSource:0}: Error finding container 4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146: Status 404 returned error can't find the container with id 4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146 Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.082631 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.149868 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150229 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150453 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.157780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl" (OuterVolumeSpecName: "kube-api-access-8xqcl") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "kube-api-access-8xqcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.247296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config" (OuterVolumeSpecName: "config") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.255711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267052 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267086 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267095 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.289732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.314279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.318614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371756 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371795 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371811 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.544563 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.678078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.685982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686257 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.699432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk" (OuterVolumeSpecName: "kube-api-access-vchwk") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "kube-api-access-vchwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.712862 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.728410 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.746760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.766773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config" (OuterVolumeSpecName: "config") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788027 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788070 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788087 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788097 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788105 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.791823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.890754 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919284 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919337 4766 scope.go:117] "RemoveContainer" containerID="bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919447 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.939466 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.942462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerDied","Data":"d9494d16b1950242e2d85088ae6e45881e6fe2494c0a57e45b5cbe2dedb19001"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.942551 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.953499 4766 generic.go:334] "Generic (PLEG): container finished" podID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerID="4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf" exitCode=0 Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.954188 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.954269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerStarted","Data":"4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.963581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"e2b7b271b357b586463753be91e6e23e2c8d157467dd4ac8a1278aee093a63d3"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.965586 4766 scope.go:117] "RemoveContainer" containerID="9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.987802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c"} Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.044330 4766 
scope.go:117] "RemoveContainer" containerID="315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b"
Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.057303 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"]
Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.064252 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"]
Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.072315 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"]
Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.075758 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"]
Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.563918 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.006567 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.008554 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5995f74f66-6c62l"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.008613 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.012708 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.012930 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" containerID="cri-o://672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" gracePeriod=30
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.013792 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" containerID="cri-o://9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" gracePeriod=30
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.013253 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.023405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.035560 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.051865 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" path="/var/lib/kubelet/pods/e0cf707d-1c30-442d-8430-e714bd68752a/volumes"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.055855 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" path="/var/lib/kubelet/pods/ee1aefba-bd2e-47f2-832c-7e74e707ad69/volumes"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.056545 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-689xd"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.056572 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerStarted","Data":"c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2"}
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.057019 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5995f74f66-6c62l" podStartSLOduration=3.057002343 podStartE2EDuration="3.057002343s" podCreationTimestamp="2026-01-30 16:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.036031 +0000 UTC m=+1278.673988346" watchObservedRunningTime="2026-01-30 16:43:44.057002343 +0000 UTC m=+1278.694959689"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.066601 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.066581109 podStartE2EDuration="4.066581109s" podCreationTimestamp="2026-01-30 16:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.05691421 +0000 UTC m=+1278.694871556" watchObservedRunningTime="2026-01-30 16:43:44.066581109 +0000 UTC m=+1278.704538455"
Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.084987 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-689xd" podStartSLOduration=3.084970643 podStartE2EDuration="3.084970643s" podCreationTimestamp="2026-01-30 16:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.082415464 +0000 UTC m=+1278.720372810" watchObservedRunningTime="2026-01-30 16:43:44.084970643 +0000 UTC m=+1278.722927989"
Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.066669 4766 generic.go:334] "Generic (PLEG): container finished" podID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerID="672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" exitCode=143
Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.067013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c"}
Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.073652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"}
Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.099144 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.496741795 podStartE2EDuration="6.099122704s" podCreationTimestamp="2026-01-30 16:43:39 +0000 UTC" firstStartedPulling="2026-01-30 16:43:40.943581125 +0000 UTC m=+1275.581538471" lastFinishedPulling="2026-01-30 16:43:42.545962034 +0000 UTC m=+1277.183919380" observedRunningTime="2026-01-30 16:43:45.096841982 +0000 UTC m=+1279.734799348" watchObservedRunningTime="2026-01-30 16:43:45.099122704 +0000 UTC m=+1279.737080050"
Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.318120 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.104170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"}
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.104992 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.140416 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.145888479 podStartE2EDuration="8.140396822s" podCreationTimestamp="2026-01-30 16:43:38 +0000 UTC" firstStartedPulling="2026-01-30 16:43:39.691160563 +0000 UTC m=+1274.329117909" lastFinishedPulling="2026-01-30 16:43:45.685668906 +0000 UTC m=+1280.323626252" observedRunningTime="2026-01-30 16:43:46.126071768 +0000 UTC m=+1280.764029114" watchObservedRunningTime="2026-01-30 16:43:46.140396822 +0000 UTC m=+1280.778354168"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901497 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"]
Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.901937 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="init"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901956 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="init"
Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.901989 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init"
Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.902016 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902025 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902280 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902302 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.903868 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.905982 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.908084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.920012 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"]
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985862 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.986022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.089029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.093426 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.094128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.095029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.098095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.107414 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.108380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.110790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.232308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.341700 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.863454 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"]
Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.922949 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:48 crc kubenswrapper[4766]: I0130 16:43:48.139319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666"}
Jan 30 16:43:48 crc kubenswrapper[4766]: I0130 16:43:48.139372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"c0a3cd47bf6f73c69d465e105e571ff0dfdead63ace53c2387dc41608358f285"}
Jan 30 16:43:49 crc kubenswrapper[4766]: I0130 16:43:49.144260 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863"}
Jan 30 16:43:49 crc kubenswrapper[4766]: I0130 16:43:49.146068 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.114710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.146696 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d4bdf9c45-5nxgr" podStartSLOduration=4.146673536 podStartE2EDuration="4.146673536s" podCreationTimestamp="2026-01-30 16:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:49.17331078 +0000 UTC m=+1283.811268136" watchObservedRunningTime="2026-01-30 16:43:50.146673536 +0000 UTC m=+1284.784630892"
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.315415 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.396362 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.396588 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" containerID="cri-o://bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" gracePeriod=30
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.397009 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" containerID="cri-o://997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" gracePeriod=30
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.654617 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.700192 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161140 4766 generic.go:334] "Generic (PLEG): container finished" podID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" exitCode=143
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161534 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" containerID="cri-o://a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" gracePeriod=30
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"}
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.163326 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" containerID="cri-o://f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" gracePeriod=30
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.480470 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-689xd"
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.561995 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"]
Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.562237 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" containerID="cri-o://05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" gracePeriod=10
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.170854 4766 generic.go:334] "Generic (PLEG): container finished" podID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerID="05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" exitCode=0
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171112 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754"}
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26"}
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171233 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26"
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.182364 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7"
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335103 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") "
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.343774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct" (OuterVolumeSpecName: "kube-api-access-wdnct") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "kube-api-access-wdnct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.404875 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.408605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config" (OuterVolumeSpecName: "config") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.413224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.415514 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438870 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438926 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438938 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438955 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438965 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.445670 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.541308 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.182821 4766 generic.go:334] "Generic (PLEG): container finished" podID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" exitCode=0
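The block above is the volume manager's teardown path in full: "UnmountVolume started" for each volume of the deleted dnsmasq pod, the plugin's "UnmountVolume.TearDown succeeded", then "Volume detached" once the actual state drops the volume. The mirror-image mount path (VerifyControllerAttachedVolume, MountVolume started, SetUp succeeded) ran for neutron-6d4bdf9c45-5nxgr a few seconds earlier. A toy desired-state/actual-state loop showing that shape (our simplification; the real reconciler is asynchronous and plugin-driven):

// reconciler.go - toy desired-vs-actual volume reconciliation in the shape
// of the reconciler_common.go lines above (a sketch, not kubelet source).
package main

import "fmt"

type reconciler struct {
	desired map[string]string // volume name -> pod that wants it
	actual  map[string]bool   // volumes currently mounted
}

func (r *reconciler) sync() {
	// Mount whatever is desired but absent from the actual state.
	for vol, pod := range r.desired {
		if !r.actual[vol] {
			fmt.Printf("MountVolume started for volume %q pod=%q\n", vol, pod)
			r.actual[vol] = true // the plugin's SetUp would run here
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
		}
	}
	// Unmount whatever is mounted but no longer desired.
	for vol := range r.actual {
		if _, ok := r.desired[vol]; !ok {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			delete(r.actual, vol) // the plugin's TearDown would run here
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}

func main() {
	r := &reconciler{desired: map[string]string{}, actual: map[string]bool{
		"dns-svc":        true, // pod deleted, so both get unmounted
		"ovsdbserver-nb": true,
	}}
	r.sync()
}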
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.182950 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7"
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.192046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"}
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.215958 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"]
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.224817 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"]
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.278034 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.545996 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:48772->10.217.0.155:9311: read: connection reset by peer"
Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.546619 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:48770->10.217.0.155:9311: read: connection reset by peer"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.010763 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.052385 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" path="/var/lib/kubelet/pods/a7ccb2d3-4270-48e3-99cc-6031edfa30ae/volumes"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") "
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175092 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") "
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175217 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") "
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") "
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175478 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") "
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs" (OuterVolumeSpecName: "logs") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.182913 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5" (OuterVolumeSpecName: "kube-api-access-4kcg5") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "kube-api-access-4kcg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.190439 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.215405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218765 4766 generic.go:334] "Generic (PLEG): container finished" podID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" exitCode=0
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"}
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf"}
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218866 4766 scope.go:117] "RemoveContainer" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218999 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.244361 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data" (OuterVolumeSpecName: "config-data") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278393 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278456 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278467 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278479 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.347636 4766 scope.go:117] "RemoveContainer" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.367469 4766 scope.go:117] "RemoveContainer" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"
Jan 30 16:43:54 crc kubenswrapper[4766]: E0130 16:43:54.368370 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": container with ID starting with 997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45 not found: ID does not exist" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.368498 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"} err="failed to get container status \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": rpc error: code = NotFound desc = could not find container \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": container with ID starting with 997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45 not found: ID does not exist"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.368589 4766 scope.go:117] "RemoveContainer" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"
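The two "Probe failed" entries above show what a readiness probe actually does: an HTTP GET against the container's healthcheck endpoint, where any transport error (here "connection reset by peer", typical of a server mid-shutdown) counts as a failure and, broadly, a 2xx/3xx status as success. A minimal prober in that spirit (endpoint copied from the log; the helper is ours, and the kubelet's prober additionally handles headers, thresholds and timeouts per probe spec):

// httprobe.go - minimal HTTP readiness check in the spirit of prober.go
// (a sketch, not the kubelet's prober).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeHTTP(url string, timeout time.Duration) (string, error) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return "failure", err // e.g. read: connection reset by peer
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", nil
	}
	return "failure", fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	result, err := probeHTTP("http://10.217.0.155:9311/healthcheck", time.Second)
	fmt.Println("probeResult:", result, "err:", err)
}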
Jan 30 16:43:54 crc kubenswrapper[4766]: E0130 16:43:54.368990 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": container with ID starting with bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0 not found: ID does not exist" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.369027 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"} err="failed to get container status \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": rpc error: code = NotFound desc = could not find container \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": container with ID starting with bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0 not found: ID does not exist"
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.551555 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.559559 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.999824 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bc6f65df6-mx4xk"
Jan 30 16:43:55 crc kubenswrapper[4766]: E0130 16:43:55.394511 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24818215_6fcc_4a45_8f7c_4f65e993eb7d.slice/crio-a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24818215_6fcc_4a45_8f7c_4f65e993eb7d.slice/crio-conmon-a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.725797 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809154 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810120 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810719 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") on node \"crc\" DevicePath \"\""
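The NotFound errors above (and the matching pair further down, after the cinder-scheduler containers are removed) are benign races: by the time scope.go retries RemoveContainer, CRI-O has already deleted the container, so ContainerStatus comes back "rpc error: code = NotFound". Deletion paths like this are typically written to be idempotent, treating NotFound as "already done". A sketch with a stand-in runtime client (the gRPC status/codes packages are real; the interface and names are ours, not the CRI API):

// removecontainer.go - idempotent delete: NotFound means "already gone",
// not a failure (a sketch; runtimeClient stands in for the CRI client).
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type runtimeClient interface {
	RemoveContainer(id string) error
}

func removeIfPresent(rt runtimeClient, id string) error {
	if err := rt.RemoveContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Raced with GC or an earlier attempt; nothing left to do.
			return nil
		}
		return err
	}
	return nil
}

// fakeRT always reports NotFound, like the runtime responses in this log.
type fakeRT struct{}

func (fakeRT) RemoveContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"
	fmt.Println("err:", removeIfPresent(fakeRT{}, id)) // err: <nil>
}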
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.819418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.819470 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r" (OuterVolumeSpecName: "kube-api-access-jvf5r") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "kube-api-access-jvf5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.820729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts" (OuterVolumeSpecName: "scripts") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.871355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data" (OuterVolumeSpecName: "config-data") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") "
Jan 30 16:43:55 crc kubenswrapper[4766]: W0130 16:43:55.912527 4766 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/24818215-6fcc-4a45-8f7c-4f65e993eb7d/volumes/kubernetes.io~secret/config-data
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data" (OuterVolumeSpecName: "config-data") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912849 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912880 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912892 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912903 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912915 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.051214 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" path="/var/lib/kubelet/pods/6c0217e5-bcc8-482c-9e44-4be03ee7d059/volumes"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245384 4766 generic.go:334] "Generic (PLEG): container finished" podID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" exitCode=0
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"}
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245473 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"6b7b6fbe45be35df26ed12004dacb8c6bf29682f09f9e1548db68481d831f9f3"}
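The empty_dir.go:500 warning above is deletion-path idempotency applied to volumes: a second TearDown of config-data finds the path already gone, skips the unmount, and the retry still ends in "UnmountVolume.TearDown succeeded". A sketch of that tolerance (unmountFn stands in for the real unmount syscall; the function is ours):

// teardown.go - idempotent volume teardown: a missing path is treated as
// already-unmounted, echoing the empty_dir.go warning above (a sketch).
package main

import (
	"fmt"
	"os"
)

func tearDown(path string, unmountFn func(string) error) error {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("Warning: Unmount skipped because path does not exist: %s\n", path)
		return nil // success: there is nothing left to tear down
	}
	if err := unmountFn(path); err != nil {
		return err
	}
	return os.Remove(path) // drop the now-empty mount point
}

func main() {
	p := "/var/lib/kubelet/pods/24818215-6fcc-4a45-8f7c-4f65e993eb7d/volumes/kubernetes.io~secret/config-data"
	fmt.Println(tearDown(p, func(string) error { return nil }))
}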
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245488 4766 scope.go:117] "RemoveContainer" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.278438 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.281657 4766 scope.go:117] "RemoveContainer" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.310465 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321281 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321691 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321705 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321721 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321727 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321736 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321743 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321766 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321771 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321783 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="init" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="init" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321800 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321806 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321977 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321992 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322004 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322013 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322026 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.323009 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.338471 4766 scope.go:117] "RemoveContainer" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.341279 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": container with ID starting with f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e not found: ID does not exist" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341313 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"} err="failed to get container status \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": rpc error: code = NotFound desc = could not find container \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": container with ID starting with f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e not found: ID does not exist" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341333 4766 scope.go:117] "RemoveContainer" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341795 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.345155 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": container with ID starting with a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a not found: ID does not exist" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.345442 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"} err="failed to get container status \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": rpc error: code = NotFound desc = could not find container \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": container with ID starting with a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a not found: ID does not exist" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.353850 4766 
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.353850 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424249 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424557 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525918 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525937 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.527000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.529900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.530720 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.531933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.532460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.546548 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0"
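The reconciler_common entries above trace the volume manager's two-phase reconcile for the new cinder-scheduler-0 pod: VerifyControllerAttachedVolume records each desired volume as attached, then MountVolume runs SetUp for anything not yet in the actual state of the world. A condensed sketch of that desired-state versus actual-state loop, with illustrative names rather than kubelet's real volumemanager types:

    package main

    import "fmt"

    type volume struct{ name, plugin string }

    func main() {
        // Desired volumes for the pod, as listed in the log.
        desired := []volume{
            {"combined-ca-bundle", "kubernetes.io/secret"},
            {"scripts", "kubernetes.io/secret"},
            {"etc-machine-id", "kubernetes.io/host-path"},
            {"kube-api-access-26q8r", "kubernetes.io/projected"},
            {"config-data-custom", "kubernetes.io/secret"},
            {"config-data", "kubernetes.io/secret"},
        }
        mounted := map[string]bool{} // actual state of the world

        // Phase 1: confirm attachment for every desired volume.
        for _, v := range desired {
            fmt.Printf("VerifyControllerAttachedVolume started for volume %q (%s)\n", v.name, v.plugin)
        }
        // Phase 2: mount whatever is desired but not yet mounted.
        for _, v := range desired {
            if mounted[v.name] {
                continue
            }
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
            mounted[v.name] = true
        }
    }

Note how the "SetUp succeeded" entries arrive out of the original order (etc-machine-id first): the mounts run as independent operations, not as one sequential pass.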
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:57 crc kubenswrapper[4766]: W0130 16:43:57.258679 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod063ebe65_0175_443e_8c75_5018c42b3f36.slice/crio-edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143 WatchSource:0}: Error finding container edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143: Status 404 returned error can't find the container with id edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143 Jan 30 16:43:57 crc kubenswrapper[4766]: I0130 16:43:57.261906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.061739 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" path="/var/lib/kubelet/pods/24818215-6fcc-4a45-8f7c-4f65e993eb7d/volumes" Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.271247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533"} Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.271480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143"} Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.282954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49"} Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.311541 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.3115197419999998 podStartE2EDuration="3.311519742s" podCreationTimestamp="2026-01-30 16:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:59.301467532 +0000 UTC m=+1293.939424898" watchObservedRunningTime="2026-01-30 16:43:59.311519742 +0000 UTC m=+1293.949477088" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.377390 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.378516 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.385296 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6wwf9" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.385589 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.386030 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.392161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.485151 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490490 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490610 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.551157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.592949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.596322 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.601150 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.619872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.634812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.725758 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 16:44:00 crc kubenswrapper[4766]: I0130 16:44:00.282580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 16:44:00 crc kubenswrapper[4766]: I0130 16:44:00.295882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"372f7d7a-9066-4b9b-884a-5257785ed101","Type":"ContainerStarted","Data":"b7b9378e6f0958ebc3c0de7dd982fb62b932e45e6c09c05227810636618c61d1"} Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.135170 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.136781 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143795 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143845 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143917 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.155697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.225993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226193 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226294 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " 
pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327800 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " 
pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.328218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.329624 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.334648 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.341323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.345835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.462235 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.673728 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.757500 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760635 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent" containerID="cri-o://17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760817 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" containerID="cri-o://05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760869 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core" containerID="cri-o://93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760905 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent" containerID="cri-o://c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.868418 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": read tcp 10.217.0.2:57172->10.217.0.157:3000: read: connection reset by peer" Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.118622 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:02 crc kubenswrapper[4766]: W0130 16:44:02.137236 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3997cdc_9abd_4aa3_9201_0015456d4750.slice/crio-49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae WatchSource:0}: Error finding container 49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae: Status 404 returned error can't find the container with id 49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.316462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319673 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" exitCode=0 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319741 4766 generic.go:334] "Generic (PLEG): container finished" 
podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" exitCode=2 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319751 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" exitCode=0 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319715 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.330967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331910 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331937 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.363352 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podStartSLOduration=2.363330817 podStartE2EDuration="2.363330817s" podCreationTimestamp="2026-01-30 16:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:03.351524161 +0000 UTC m=+1297.989481527" watchObservedRunningTime="2026-01-30 16:44:03.363330817 +0000 UTC m=+1298.001288163" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.229853 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359909 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" exitCode=0 Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"cd2c2b2506c59c114c23d0ceb86a25fba0633c14ce109f4881053f349d4e17dc"} Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359992 4766 scope.go:117] "RemoveContainer" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.360239 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.365688 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366283 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366339 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.367110 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.367140 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.372573 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d" (OuterVolumeSpecName: "kube-api-access-f2p4d") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "kube-api-access-f2p4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.384954 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts" (OuterVolumeSpecName: "scripts") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.395787 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.458419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468366 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468412 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468426 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468434 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.491495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data" (OuterVolumeSpecName: "config-data") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.516234 4766 scope.go:117] "RemoveContainer" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.540056 4766 scope.go:117] "RemoveContainer" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.570391 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.570754 4766 scope.go:117] "RemoveContainer" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.588759 4766 scope.go:117] "RemoveContainer" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589083 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": container with ID starting with 05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301 not found: ID does not exist" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589124 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} err="failed to get container status \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": rpc error: code = NotFound desc = could not find container \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": container with ID starting with 
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589124 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} err="failed to get container status \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": rpc error: code = NotFound desc = could not find container \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": container with ID starting with 05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301 not found: ID does not exist"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589153 4766 scope.go:117] "RemoveContainer" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589493 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": container with ID starting with 93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895 not found: ID does not exist" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589584 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"} err="failed to get container status \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": rpc error: code = NotFound desc = could not find container \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": container with ID starting with 93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895 not found: ID does not exist"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589662 4766 scope.go:117] "RemoveContainer" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589913 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": container with ID starting with c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4 not found: ID does not exist" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589999 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} err="failed to get container status \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": rpc error: code = NotFound desc = could not find container \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": container with ID starting with c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4 not found: ID does not exist"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.590067 4766 scope.go:117] "RemoveContainer" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.590356 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": container with ID starting with 17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804 not found: ID does not exist" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.590451 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} err="failed to get container status \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": rpc error: code = NotFound desc = could not find container \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": container with ID starting with 17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804 not found: ID does not exist"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.697705 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.705571 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729851 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729869 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729890 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729897 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729917 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent"
Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729926 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729932 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730078 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730095 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730107 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core"
Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730123 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent"
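The DELETE / REMOVE / ADD sequence for openstack/ceilometer-0 above illustrates why kubelet keys its bookkeeping by pod UID rather than name: the replacement pod arrives under the same name while the old incarnation (UID cbdbf5c1-...) is still being torn down, and the new incarnation (UID 9e6f8d1d-..., mounted below) must be tracked independently. A small illustration of UID-keyed tracking:

    package main

    import "fmt"

    type pod struct{ namespace, name, uid string }

    func main() {
        // Keyed by UID: both "ceilometer-0" incarnations can coexist briefly.
        pods := map[string]pod{}
        oldPod := pod{"openstack", "ceilometer-0", "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"}
        newPod := pod{"openstack", "ceilometer-0", "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"}

        pods[oldPod.uid] = oldPod
        pods[newPod.uid] = newPod // SyncLoop ADD of the replacement: no name collision
        delete(pods, oldPod.uid)  // SyncLoop DELETE/REMOVE of the old incarnation

        for uid, p := range pods {
            fmt.Printf("tracking %s/%s uid=%s\n", p.namespace, p.name, uid)
        }
    }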
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.734870 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.736251 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.738446 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875931 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: 
I0130 16:44:06.978762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.980647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.982063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.982921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.984484 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.004431 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.004702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.005775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.150785 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.154550 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:08 crc kubenswrapper[4766]: I0130 16:44:08.057259 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" path="/var/lib/kubelet/pods/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632/volumes" Jan 30 16:44:08 crc kubenswrapper[4766]: I0130 16:44:08.597945 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:09 crc kubenswrapper[4766]: I0130 16:44:09.045511 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:44:09 crc kubenswrapper[4766]: I0130 16:44:09.045907 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.467219 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.467833 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.802742 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:13 crc kubenswrapper[4766]: I0130 16:44:13.567286 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.451667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.454162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstackclient" event={"ID":"372f7d7a-9066-4b9b-884a-5257785ed101","Type":"ContainerStarted","Data":"df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.458668 4766 generic.go:334] "Generic (PLEG): container finished" podID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerID="9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" exitCode=137 Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.458716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.589998 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.609001 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.497601463 podStartE2EDuration="15.608981288s" podCreationTimestamp="2026-01-30 16:43:59 +0000 UTC" firstStartedPulling="2026-01-30 16:44:00.284513019 +0000 UTC m=+1294.922470365" lastFinishedPulling="2026-01-30 16:44:13.395892844 +0000 UTC m=+1308.033850190" observedRunningTime="2026-01-30 16:44:14.47165316 +0000 UTC m=+1309.109610506" watchObservedRunningTime="2026-01-30 16:44:14.608981288 +0000 UTC m=+1309.246938634" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722126 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722730 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs" (OuterVolumeSpecName: "logs") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722884 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.723332 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.723360 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727301 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts" (OuterVolumeSpecName: "scripts") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727379 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727815 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt" (OuterVolumeSpecName: "kube-api-access-9x2tt") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "kube-api-access-9x2tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.752737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.782140 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data" (OuterVolumeSpecName: "config-data") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824772 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824808 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824820 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824833 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.470867 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.470865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"c8586f92647bbb5a114dcd6f6899c5036c3e271083fa860bf64d7866744bcc76"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.471367 4766 scope.go:117] "RemoveContainer" containerID="9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.473698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.473747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.504304 4766 scope.go:117] "RemoveContainer" containerID="672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.526277 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.540283 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556004 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: E0130 16:44:15.556628 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556749 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: E0130 16:44:15.556807 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556867 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.557194 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.557338 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.558536 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564624 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.584070 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638807 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.740898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.740956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741107 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741910 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.745832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.750809 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.751363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.752087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.761794 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.762227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.767654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.878787 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.079697 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" path="/var/lib/kubelet/pods/87ea3ac4-577b-4c1d-bf9d-816ad975cce1/volumes" Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.270014 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.487568 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca"} Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.488462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"7e89f84a27af28de0ff96a206ea024d02e0721f6cc45b38d9fef889091b6e08b"} Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.819452 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.824346 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" containerID="cri-o://9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" gracePeriod=30 Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.824577 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" containerID="cri-o://87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.246022 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.311943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.312162 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5995f74f66-6c62l" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" containerID="cri-o://f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.312631 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5995f74f66-6c62l" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" containerID="cri-o://6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.503537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.505075 4766 generic.go:334] "Generic (PLEG): container finished" podID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" exitCode=0 Jan 30 
16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.505127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.507846 4766 generic.go:334] "Generic (PLEG): container finished" podID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" exitCode=143 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.507876 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912308 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912775 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" containerID="cri-o://c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912905 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" containerID="cri-o://3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519491 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd" containerID="cri-o://a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519529 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent" containerID="cri-o://abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519568 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core" containerID="cri-o://ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.521016 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent" containerID="cri-o://0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.521920 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.522717 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.527120 4766 generic.go:334] "Generic (PLEG): container finished" podID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerID="c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" exitCode=143 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.527203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.551510 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.03078637 podStartE2EDuration="12.551489031s" podCreationTimestamp="2026-01-30 16:44:06 +0000 UTC" firstStartedPulling="2026-01-30 16:44:13.564986506 +0000 UTC m=+1308.202943852" lastFinishedPulling="2026-01-30 16:44:18.085689167 +0000 UTC m=+1312.723646513" observedRunningTime="2026-01-30 16:44:18.541198238 +0000 UTC m=+1313.179155594" watchObservedRunningTime="2026-01-30 16:44:18.551489031 +0000 UTC m=+1313.189446377" Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.562520 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.562499814 podStartE2EDuration="3.562499814s" podCreationTimestamp="2026-01-30 16:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:18.558638288 +0000 UTC m=+1313.196595634" watchObservedRunningTime="2026-01-30 16:44:18.562499814 +0000 UTC m=+1313.200457160" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.378071 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517542 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517679 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517856 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.522945 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm" (OuterVolumeSpecName: "kube-api-access-p5bmm") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "kube-api-access-p5bmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.529590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550747 4766 generic.go:334] "Generic (PLEG): container finished" podID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550849 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"e2b7b271b357b586463753be91e6e23e2c8d157467dd4ac8a1278aee093a63d3"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550864 4766 scope.go:117] "RemoveContainer" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.551243 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563607 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563636 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" exitCode=2 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563644 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564502 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.587369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config" (OuterVolumeSpecName: "config") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622540 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622894 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622909 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.627385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.650009 4766 scope.go:117] "RemoveContainer" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.655408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.682008 4766 scope.go:117] "RemoveContainer" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: E0130 16:44:19.685327 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": container with ID starting with 6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d not found: ID does not exist" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.685381 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"} err="failed to get container status \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": rpc error: code = NotFound desc = could not find container \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": container with ID starting with 6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d not found: ID does not exist" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.685403 4766 scope.go:117] "RemoveContainer" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: E0130 16:44:19.696718 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": container with ID starting with f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64 not found: ID does not exist" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.696802 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"} err="failed to get container status \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": rpc error: code = NotFound desc = could not find container \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": container with ID starting with f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64 not found: ID does not exist" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.726620 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.726806 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.885111 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.897819 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.073236 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" path="/var/lib/kubelet/pods/41b169a2-8e44-4929-97b3-dbffe0cde1e3/volumes" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.448663 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467791 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468235 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs" (OuterVolumeSpecName: "logs") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468370 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468412 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468686 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468701 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.482554 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl" (OuterVolumeSpecName: "kube-api-access-q78hl") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "kube-api-access-q78hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.484661 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.501250 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts" (OuterVolumeSpecName: "scripts") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.534292 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.564244 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.569999 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570304 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570401 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570467 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570523 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592483 4766 generic.go:334] "Generic (PLEG): container finished" podID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" exitCode=0 Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"} Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7"} Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592797 4766 scope.go:117] "RemoveContainer" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592949 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.610358 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.645494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data" (OuterVolumeSpecName: "config-data") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.651378 4766 scope.go:117] "RemoveContainer" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.670591 4766 scope.go:117] "RemoveContainer" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.671015 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": container with ID starting with 87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896 not found: ID does not exist" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671045 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"} err="failed to get container status \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": rpc error: code = NotFound desc = could not find container \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": container with ID starting with 87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896 not found: ID does not exist" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671066 4766 scope.go:117] "RemoveContainer" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671226 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671243 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.672117 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": container with ID starting with 9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a not found: ID does not exist" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.672139 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"} err="failed to get container 
status \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": rpc error: code = NotFound desc = could not find container \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": container with ID starting with 9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a not found: ID does not exist" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.932571 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.942493 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973409 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973838 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973859 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973889 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973897 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973920 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973940 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973947 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974258 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974294 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974369 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974391 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.975512 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.977745 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.988054 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.997835 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.114007 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.119418 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.162951 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177600 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177678 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 
16:44:21.177923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177963 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.235893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.237507 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.245641 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.248562 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.249776 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.263597 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.276315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.279985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc 
kubenswrapper[4766]: I0130 16:44:21.280211 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280311 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280369 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280907 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.283652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.283716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.292023 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.295771 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.297786 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.327037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.360843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389668 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc 
kubenswrapper[4766]: I0130 16:44:21.389703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.390569 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.391020 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.422417 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.430274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.431377 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.440835 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.446768 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.447952 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.449945 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.455112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491970 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492000 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.511303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.515327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.583156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.602890 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603928 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603953 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.604896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.624023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.625526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.631847 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.634200 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.639531 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.639628 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.645924 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.664515 4766 generic.go:334] "Generic (PLEG): container finished" podID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerID="3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" exitCode=0 Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.664864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938"} Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.679729 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.705058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.705116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.709189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.765575 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.789608 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810757 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.811421 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.845368 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.914873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.914953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915038 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t96m\" (UniqueName: 
\"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915302 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.916506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs" (OuterVolumeSpecName: "logs") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.926237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m" (OuterVolumeSpecName: "kube-api-access-5t96m") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "kube-api-access-5t96m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.926559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts" (OuterVolumeSpecName: "scripts") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937674 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937705 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937715 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937724 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937753 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.961743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.965163 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.979559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.008384 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.008781 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data" (OuterVolumeSpecName: "config-data") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039097 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039137 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039150 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039162 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.091504 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" path="/var/lib/kubelet/pods/64f88e91-eb62-45a5-bfcb-d38a918e23da/volumes" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.341130 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.398285 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.529804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: W0130 16:44:22.551919 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d5b8a42_39dd_4b1b_9f92_1e3585b6707b.slice/crio-a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37 WatchSource:0}: Error finding container a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37: Status 404 returned error can't find the container with id a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37 Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.603943 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.626661 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.717332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerStarted","Data":"d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.717385 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerStarted","Data":"45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.721345 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:22 crc 
kubenswrapper[4766]: I0130 16:44:22.739583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerStarted","Data":"e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.742856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerStarted","Data":"40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.746833 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-smswb" podStartSLOduration=1.7468131900000001 podStartE2EDuration="1.74681319s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:22.735429786 +0000 UTC m=+1317.373387152" watchObservedRunningTime="2026-01-30 16:44:22.74681319 +0000 UTC m=+1317.384770536" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.748883 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.754575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerStarted","Data":"3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"323ddb58f9d31b5bc758e9920b4b5a6270bffb075aa3aec77b37c8af05f7ec01"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765205 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765243 4766 scope.go:117] "RemoveContainer" containerID="3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.771886 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b00e-account-create-update-r7p4m" podStartSLOduration=1.771870759 podStartE2EDuration="1.771870759s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:22.758004238 +0000 UTC m=+1317.395961584" watchObservedRunningTime="2026-01-30 16:44:22.771870759 +0000 UTC m=+1317.409828095" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.800907 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.802806 4766 scope.go:117] "RemoveContainer" containerID="c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.807765 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.850786 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: E0130 16:44:22.852645 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.852679 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: E0130 16:44:22.852724 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.852732 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.853318 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.853378 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.868558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.875049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.879023 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.879426 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.974249 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.993959 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.993999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994065 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.107129 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109528 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 
16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.112202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.107741 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.119714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.122708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.122936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.125098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.135504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.151164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.166893 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.228889 4766 util.go:30] "No sandbox for pod can be found. 
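The records above follow klog's single-line format: a severity letter fused with an MMDD date (I0130), a wall-clock timestamp, the emitting PID, the source file:line, then a quoted structured message with key="value" pairs. A minimal Go sketch for splitting one record into those fields (the regexp and field names are illustrative, not kubelet code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Severity letter, MMDD date, time, PID, file:line, then the message.
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
    	// A record copied verbatim from the log above.
    	line := `I0130 16:44:23.112202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\"" pod="openstack/glance-default-internal-api-0"`
    	m := klogRe.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n", m[1], m[2], m[3], m[4], m[5])
    	fmt.Printf("msg=%s\n", m[6])
    }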
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.785557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerStarted","Data":"9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.785953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerStarted","Data":"c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.790711 4766 generic.go:334] "Generic (PLEG): container finished" podID="cea24037-4775-49f8-8a3b-d194ea750544" containerID="d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.790773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerDied","Data":"d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.813484 4766 generic.go:334] "Generic (PLEG): container finished" podID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerID="894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.813580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerDied","Data":"894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.822096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerStarted","Data":"ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.822137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerStarted","Data":"334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.823602 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-83af-account-create-update-87kzk" podStartSLOduration=2.823591243 podStartE2EDuration="2.823591243s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:23.80968304 +0000 UTC m=+1318.447640386" watchObservedRunningTime="2026-01-30 16:44:23.823591243 +0000 UTC m=+1318.461548589" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.834305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerStarted","Data":"ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.843686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.848820 4766 generic.go:334] "Generic (PLEG): container finished" podID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerID="c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.848935 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerDied","Data":"c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.875451 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" podStartSLOduration=2.875425819 podStartE2EDuration="2.875425819s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:23.870495584 +0000 UTC m=+1318.508452930" watchObservedRunningTime="2026-01-30 16:44:23.875425819 +0000 UTC m=+1318.513383165" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.917376 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.057376 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" path="/var/lib/kubelet/pods/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda/volumes" Jan 30 16:44:24 crc kubenswrapper[4766]: W0130 16:44:24.070385 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc2931b_8439_4c5c_be4d_43f4aab528f2.slice/crio-2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b WatchSource:0}: Error finding container 2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b: Status 404 returned error can't find the container with id 2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.886914 4766 generic.go:334] "Generic (PLEG): container finished" podID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerID="ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.887013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerDied","Data":"ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.893138 4766 generic.go:334] "Generic (PLEG): container finished" podID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerID="ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.893304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerDied","Data":"ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.895692 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.897544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.897578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.909775 4766 generic.go:334] "Generic (PLEG): container finished" podID="98478911-5d75-4bba-a256-e1c2c28e56de" containerID="9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.909866 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerDied","Data":"9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.912777 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913810 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913827 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a" Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.974349 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.974333122 podStartE2EDuration="4.974333122s" podCreationTimestamp="2026-01-30 16:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:24.972570643 +0000 UTC m=+1319.610527989" watchObservedRunningTime="2026-01-30 16:44:24.974333122 +0000 UTC m=+1319.612290468" Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.981834 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062711 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062840 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062886 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062921 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.070488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.070959 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.076345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw" (OuterVolumeSpecName: "kube-api-access-jvtdw") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "kube-api-access-jvtdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.076430 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts" (OuterVolumeSpecName: "scripts") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.169249 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176346 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176729 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176824 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.206288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.238387 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.280546 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.280773 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.338293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data" (OuterVolumeSpecName: "config-data") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.386025 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.445579 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.486928 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.487037 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.495409 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn" (OuterVolumeSpecName: "kube-api-access-dlzjn") pod "574fc4f9-56c3-44bf-bb85-26bb97a23ddc" (UID: "574fc4f9-56c3-44bf-bb85-26bb97a23ddc"). InnerVolumeSpecName "kube-api-access-dlzjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.499316 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "574fc4f9-56c3-44bf-bb85-26bb97a23ddc" (UID: "574fc4f9-56c3-44bf-bb85-26bb97a23ddc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.592353 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.592378 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.633216 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.642731 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.693397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"cea24037-4775-49f8-8a3b-d194ea750544\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694190 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"d707ae8a-f650-48e3-87e8-dc79076433e4\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694256 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"d707ae8a-f650-48e3-87e8-dc79076433e4\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"cea24037-4775-49f8-8a3b-d194ea750544\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.695227 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d707ae8a-f650-48e3-87e8-dc79076433e4" (UID: "d707ae8a-f650-48e3-87e8-dc79076433e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.695633 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.696150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cea24037-4775-49f8-8a3b-d194ea750544" (UID: "cea24037-4775-49f8-8a3b-d194ea750544"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.700718 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv" (OuterVolumeSpecName: "kube-api-access-7mmpv") pod "cea24037-4775-49f8-8a3b-d194ea750544" (UID: "cea24037-4775-49f8-8a3b-d194ea750544"). InnerVolumeSpecName "kube-api-access-7mmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.700910 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk" (OuterVolumeSpecName: "kube-api-access-lpwqk") pod "d707ae8a-f650-48e3-87e8-dc79076433e4" (UID: "d707ae8a-f650-48e3-87e8-dc79076433e4"). InnerVolumeSpecName "kube-api-access-lpwqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.797792 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.798136 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.798151 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.928527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerDied","Data":"e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.929787 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.929948 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.933737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerDied","Data":"3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940757 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940811 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.957452 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb"
Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.963933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerDied","Data":"45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429"}
Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.968617 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429"
Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.966634 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.971760 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.971725371 podStartE2EDuration="3.971725371s" podCreationTimestamp="2026-01-30 16:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:25.96623106 +0000 UTC m=+1320.604188426" watchObservedRunningTime="2026-01-30 16:44:25.971725371 +0000 UTC m=+1320.609682717"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.070695 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.078218 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.090235 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.090991 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091063 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091121 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091200 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091286 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091413 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091478 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091527 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091593 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091645 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091708 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091761 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core"
Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091823 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091876 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092082 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092146 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092250 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092324 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092396 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092464 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092520 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.094215 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.099355 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.100090 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.118688 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206216 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206283 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309408 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.310282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.310755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.315515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.317865 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.325312 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.331090 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.332265 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.456199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.616457 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.637992 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.641120 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.663726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"98478911-5d75-4bba-a256-e1c2c28e56de\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") "
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.663827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"98478911-5d75-4bba-a256-e1c2c28e56de\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") "
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.665478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98478911-5d75-4bba-a256-e1c2c28e56de" (UID: "98478911-5d75-4bba-a256-e1c2c28e56de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.679386 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp" (OuterVolumeSpecName: "kube-api-access-75wfp") pod "98478911-5d75-4bba-a256-e1c2c28e56de" (UID: "98478911-5d75-4bba-a256-e1c2c28e56de"). InnerVolumeSpecName "kube-api-access-75wfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
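Note that ceilometer-0 was deleted and re-added (SyncLoop DELETE/REMOVE/ADD above) and comes back under a new pod UID (d1d1e402-… instead of 9e6f8d1d-…), so every volume is re-verified and re-mounted from scratch. The kubelet keys a pod's volume directories by UID, which is why the old UID's directory is later "Cleaned up orphaned pod volumes dir". A small Go sketch of that path layout (illustrative; the "kubernetes.io~secret" plugin-directory convention is an assumption about the on-disk layout, not taken from this log):

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // Volume directories live under the pod's UID, not its name.
    func podVolumeDir(podUID, plugin, volume string) string {
    	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, volume)
    }

    func main() {
    	oldUID := "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" // deleted ceilometer-0
    	newUID := "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" // recreated ceilometer-0
    	fmt.Println(podVolumeDir(oldUID, "kubernetes.io~secret", "config-data"))
    	fmt.Println(podVolumeDir(newUID, "kubernetes.io~secret", "config-data"))
    }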
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"0c69ac66-232c-41b5-95a8-66eeb597bf70\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767680 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"0c69ac66-232c-41b5-95a8-66eeb597bf70\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767725 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.768282 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.768308 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.770153 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" (UID: "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.770765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c69ac66-232c-41b5-95a8-66eeb597bf70" (UID: "0c69ac66-232c-41b5-95a8-66eeb597bf70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.774478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt" (OuterVolumeSpecName: "kube-api-access-bjsdt") pod "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" (UID: "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d"). InnerVolumeSpecName "kube-api-access-bjsdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.783284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc" (OuterVolumeSpecName: "kube-api-access-jlvzc") pod "0c69ac66-232c-41b5-95a8-66eeb597bf70" (UID: "0c69ac66-232c-41b5-95a8-66eeb597bf70"). InnerVolumeSpecName "kube-api-access-jlvzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869577 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869614 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869624 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869632 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973780 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerDied","Data":"c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e"} Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973842 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973790 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.976907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerDied","Data":"40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1"} Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.976980 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.977073 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993657 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993876 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerDied","Data":"334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"}
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993928 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993955 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:27 crc kubenswrapper[4766]: I0130 16:44:27.012800 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.002530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"9b528af22b1b5581dbc2a01e256cf97cec5bfd26af827ddc74d5e4d0a050df47"}
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.052042 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" path="/var/lib/kubelet/pods/9e6f8d1d-5532-47c4-97db-68a1b5b3f876/volumes"
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.208891 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 16:44:30 crc kubenswrapper[4766]: I0130 16:44:30.025869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150"}
Jan 30 16:44:30 crc kubenswrapper[4766]: I0130 16:44:30.026496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59"}
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.603972 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.604276 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.634613 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.653735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.905927 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"]
Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906321 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906339 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906353 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906360 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906380 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906386 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906566 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906594 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906603 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.907143 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.909638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5t29t"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.909839 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.911770 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.926467 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"]
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.967992 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968197 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112"}
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.070102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.074903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.077610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " 
pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.090768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.093801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.226423 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.697383 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.055113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerStarted","Data":"de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6"} Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.230097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.230156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.270345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.281911 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.064245 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.064558 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.066047 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.066077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.192191 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.196189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.075405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d"} Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.076349 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.108053 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6450019839999999 podStartE2EDuration="9.108027953s" podCreationTimestamp="2026-01-30 16:44:26 +0000 UTC" firstStartedPulling="2026-01-30 16:44:27.012585806 +0000 UTC m=+1321.650543152" lastFinishedPulling="2026-01-30 16:44:34.475611775 +0000 UTC m=+1329.113569121" observedRunningTime="2026-01-30 16:44:35.103067227 +0000 UTC m=+1329.741024573" watchObservedRunningTime="2026-01-30 16:44:35.108027953 +0000 UTC m=+1329.745985309" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.086272 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.086312 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.228414 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.317466 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045304 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045668 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.046232 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.046286 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba" gracePeriod=600 Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120709 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" 
containerID="401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba" exitCode=0 Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120808 4766 scope.go:117] "RemoveContainer" containerID="ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" Jan 30 16:44:41 crc kubenswrapper[4766]: I0130 16:44:41.130578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerStarted","Data":"53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab"} Jan 30 16:44:41 crc kubenswrapper[4766]: I0130 16:44:41.132634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.059350 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.059975 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent" containerID="cri-o://b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060084 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core" containerID="cri-o://0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060110 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" containerID="cri-o://76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060483 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd" containerID="cri-o://8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.158611 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" podStartSLOduration=3.12062016 podStartE2EDuration="11.158596795s" podCreationTimestamp="2026-01-30 16:44:31 +0000 UTC" firstStartedPulling="2026-01-30 16:44:32.701028464 +0000 UTC m=+1327.338985820" lastFinishedPulling="2026-01-30 16:44:40.739005109 +0000 UTC m=+1335.376962455" observedRunningTime="2026-01-30 16:44:42.150982375 +0000 UTC m=+1336.788939741" watchObservedRunningTime="2026-01-30 16:44:42.158596795 +0000 UTC m=+1336.796554141" Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.150898 4766 generic.go:334] "Generic (PLEG): container 
finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d" exitCode=0 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151408 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112" exitCode=2 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151417 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59" exitCode=0 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.150988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d"} Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112"} Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151466 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59"} Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.175373 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150" exitCode=0 Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.175461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150"} Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.619513 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719835 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719914 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720145 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.721452 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.721479 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.727306 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm" (OuterVolumeSpecName: "kube-api-access-wqnpm") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "kube-api-access-wqnpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.728599 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts" (OuterVolumeSpecName: "scripts") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.761915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.800159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.822865 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823240 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823335 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823443 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.824817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data" (OuterVolumeSpecName: "config-data") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.925336 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.192716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"9b528af22b1b5581dbc2a01e256cf97cec5bfd26af827ddc74d5e4d0a050df47"} Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.192826 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.193222 4766 scope.go:117] "RemoveContainer" containerID="8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.218315 4766 scope.go:117] "RemoveContainer" containerID="0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.248385 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.265547 4766 scope.go:117] "RemoveContainer" containerID="76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.280683 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.290138 4766 scope.go:117] "RemoveContainer" containerID="b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.290381 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291538 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291588 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291631 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291641 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291656 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291666 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core" Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291681 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291688 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292011 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292040 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292056 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292073 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.294216 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.298580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.299254 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.300305 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.456935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457251 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559570 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.560070 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.560421 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.563559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.567905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.567934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.569665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.579236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.638514 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:46 crc kubenswrapper[4766]: I0130 16:44:46.050744 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" path="/var/lib/kubelet/pods/d1d1e402-7f4e-4c9e-9831-0a5d14616fde/volumes" Jan 30 16:44:46 crc kubenswrapper[4766]: I0130 16:44:46.300694 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:47 crc kubenswrapper[4766]: I0130 16:44:47.210461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a"} Jan 30 16:44:47 crc kubenswrapper[4766]: I0130 16:44:47.210786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"61a267512b3a74db1d89e0f87c3f2b0cc5973c3838b369b646d6b0db83c2aa4a"} Jan 30 16:44:48 crc kubenswrapper[4766]: I0130 16:44:48.225782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c"} Jan 30 16:44:48 crc kubenswrapper[4766]: I0130 16:44:48.436803 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:49 crc kubenswrapper[4766]: I0130 16:44:49.247408 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1"} Jan 30 16:44:55 crc kubenswrapper[4766]: I0130 16:44:55.313658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591"} Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320629 4766 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" containerID="cri-o://13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320741 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" containerID="cri-o://0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320762 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" containerID="cri-o://555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320810 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" containerID="cri-o://84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.359870 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.48163305 podStartE2EDuration="11.35982763s" podCreationTimestamp="2026-01-30 16:44:45 +0000 UTC" firstStartedPulling="2026-01-30 16:44:46.304528495 +0000 UTC m=+1340.942485841" lastFinishedPulling="2026-01-30 16:44:54.182723075 +0000 UTC m=+1348.820680421" observedRunningTime="2026-01-30 16:44:56.349190427 +0000 UTC m=+1350.987147773" watchObservedRunningTime="2026-01-30 16:44:56.35982763 +0000 UTC m=+1350.997784976" Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337016 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" exitCode=0 Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337056 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" exitCode=2 Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337079 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591"} Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1"} Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.142215 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.143862 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.150216 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.150485 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.158800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258672 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.361698 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod 
\"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.368056 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.378540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.462376 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.885685 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.374970 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" exitCode=0 Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.375055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c"} Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.377588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerStarted","Data":"451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1"} Jan 30 16:45:02 crc kubenswrapper[4766]: I0130 16:45:02.386221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerStarted","Data":"8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2"} Jan 30 16:45:02 crc kubenswrapper[4766]: I0130 16:45:02.401517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" podStartSLOduration=2.401502994 podStartE2EDuration="2.401502994s" podCreationTimestamp="2026-01-30 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:02.399732425 +0000 UTC m=+1357.037689791" watchObservedRunningTime="2026-01-30 16:45:02.401502994 +0000 UTC m=+1357.039460340" Jan 30 16:45:04 crc kubenswrapper[4766]: I0130 16:45:04.411246 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerID="8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2" exitCode=0 Jan 30 16:45:04 crc kubenswrapper[4766]: I0130 
16:45:04.411379 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerDied","Data":"8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2"} Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.830228 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959647 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.961464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.966839 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.967506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68" (OuterVolumeSpecName: "kube-api-access-zsh68") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "kube-api-access-zsh68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062044 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062082 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062093 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerDied","Data":"451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1"} Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428765 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428781 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.434055 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" exitCode=0 Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.434088 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a"} Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.194013 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.204873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.204980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205169 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205668 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205817 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.249358 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.286917 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306362 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306746 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307030 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307049 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307061 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307794 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data" (OuterVolumeSpecName: "config-data") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.310348 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts" (OuterVolumeSpecName: "scripts") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.311451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4" (OuterVolumeSpecName: "kube-api-access-t27b4") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). 
InnerVolumeSpecName "kube-api-access-t27b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408501 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408542 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408552 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"61a267512b3a74db1d89e0f87c3f2b0cc5973c3838b369b646d6b0db83c2aa4a"} Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459552 4766 scope.go:117] "RemoveContainer" containerID="84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459262 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.489923 4766 scope.go:117] "RemoveContainer" containerID="0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.496116 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.506223 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.522914 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523492 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523503 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523515 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523521 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523539 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523546 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523560 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" 
containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523566 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523579 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523585 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523740 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523751 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523762 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523773 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523782 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.525243 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.530310 4766 scope.go:117] "RemoveContainer" containerID="555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.530453 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.531754 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.537099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.560524 4766 scope.go:117] "RemoveContainer" containerID="13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.616971 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617120 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617156 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617493 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 
16:45:08.719006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724683 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724746 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.725850 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.741822 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.849997 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:09 crc kubenswrapper[4766]: I0130 16:45:09.305562 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:09 crc kubenswrapper[4766]: I0130 16:45:09.468412 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe"} Jan 30 16:45:10 crc kubenswrapper[4766]: I0130 16:45:10.049393 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" path="/var/lib/kubelet/pods/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08/volumes" Jan 30 16:45:10 crc kubenswrapper[4766]: I0130 16:45:10.477877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} Jan 30 16:45:11 crc kubenswrapper[4766]: I0130 16:45:11.488091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} Jan 30 16:45:11 crc kubenswrapper[4766]: I0130 16:45:11.488744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"} Jan 30 16:45:15 crc kubenswrapper[4766]: I0130 16:45:15.521878 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} Jan 30 16:45:15 crc kubenswrapper[4766]: I0130 16:45:15.522547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:45:15 crc 
Jan 30 16:45:15 crc kubenswrapper[4766]: I0130 16:45:15.551710 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.459936242 podStartE2EDuration="7.551691873s" podCreationTimestamp="2026-01-30 16:45:08 +0000 UTC" firstStartedPulling="2026-01-30 16:45:09.311847627 +0000 UTC m=+1363.949804973" lastFinishedPulling="2026-01-30 16:45:14.403603258 +0000 UTC m=+1369.041560604" observedRunningTime="2026-01-30 16:45:15.546612403 +0000 UTC m=+1370.184569749" watchObservedRunningTime="2026-01-30 16:45:15.551691873 +0000 UTC m=+1370.189649219"
Jan 30 16:45:21 crc kubenswrapper[4766]: I0130 16:45:21.572739 4766 generic.go:334] "Generic (PLEG): container finished" podID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerID="53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab" exitCode=0
Jan 30 16:45:21 crc kubenswrapper[4766]: I0130 16:45:21.572861 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerDied","Data":"53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab"}
Jan 30 16:45:22 crc kubenswrapper[4766]: I0130 16:45:22.930680 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g"
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.002784 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") "
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") "
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003225 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") "
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") "
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.016450 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts" (OuterVolumeSpecName: "scripts") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.016605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6" (OuterVolumeSpecName: "kube-api-access-thmr6") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "kube-api-access-thmr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
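[editor note] The pod_startup_latency_tracker entry for ceilometer-0 above is internally consistent: podStartE2EDuration (7.551691873s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.459936242s) appears to be that E2E duration minus the image-pull window lastFinishedPulling - firstStartedPulling (about 5.091755631s), i.e. startup latency excluding pulls. A quick Go check under that assumption:

```go
// Verify the ceilometer-0 startup-latency arithmetic from the log entry,
// assuming podStartSLOduration = E2E duration - image pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2026, 1, 30, 16, 45, 8, 0, time.UTC)
	running := time.Date(2026, 1, 30, 16, 45, 15, 551691873, time.UTC)
	pullStart := time.Date(2026, 1, 30, 16, 45, 9, 311847627, time.UTC)
	pullEnd := time.Date(2026, 1, 30, 16, 45, 14, 403603258, time.UTC)

	e2e := running.Sub(created)         // 7.551691873s, matches podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 2.459936242s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
```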
InnerVolumeSpecName "kube-api-access-thmr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.026813 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data" (OuterVolumeSpecName: "config-data") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.028582 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106466 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106527 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106539 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106550 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerDied","Data":"de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6"} Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591673 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591431 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701043 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:45:23 crc kubenswrapper[4766]: E0130 16:45:23.701571 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701597 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701827 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.702660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.707563 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.708064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5t29t" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.712735 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716073 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.818408 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.818500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: 
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.818630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.825063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.829169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.839431 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.024720 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.479271 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 16:45:24 crc kubenswrapper[4766]: W0130 16:45:24.483051 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5346df4_67e7_4a20_bb56_11173908a334.slice/crio-33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09 WatchSource:0}: Error finding container 33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09: Status 404 returned error can't find the container with id 33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09
Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.601402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerStarted","Data":"33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09"}
Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.611128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerStarted","Data":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"}
Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.611454 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.635070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.635049661 podStartE2EDuration="2.635049661s" podCreationTimestamp="2026-01-30 16:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:25.628445149 +0000 UTC m=+1380.266402495" watchObservedRunningTime="2026-01-30 16:45:25.635049661 +0000 UTC m=+1380.273007007"
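[editor note] The lone W0130 manager.go:1169 entry above looks like cadvisor's cgroup watcher racing pod startup: the crio-33febc3f… cgroup fired a watch event before the runtime could answer a lookup for that ID, so the "Status 404" is transient; the PLEG event moments later shows the same container 33febc3f… starting normally. (Note also the zero-value firstStartedPulling/lastFinishedPulling in the latency entry: with no pull, podStartSLOduration equals podStartE2EDuration.) A hedged sketch of the tolerant handling such watchers typically want; errNotFound, lookupContainer and processWatchEvent are stand-ins, not cadvisor's API:

```go
// A watch event can outrun the runtime's bookkeeping, so a not-found on
// lookup is treated as retryable noise rather than a hard failure.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("404: can't find the container")

func lookupContainer(id string) error { return errNotFound } // stand-in

func processWatchEvent(id string) error {
	if err := lookupContainer(id); errors.Is(err, errNotFound) {
		// Container raced away or isn't registered yet; a later event
		// or housekeeping pass will pick it up.
		return nil
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	fmt.Println(processWatchEvent("33febc3f7d219c78")) // <nil>
}
```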
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.050856 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.484253 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"]
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.485657 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.493460 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.496072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.497971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"]
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.532998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533080 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635955 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.641919 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.644881 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.656236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.664215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl"
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.693247 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.694962 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.698081 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.734959 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.779019 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.780629 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.785316 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.796795 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.815312 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840575 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840645 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840686 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840720 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.841540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.846146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.847810 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod 
\"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.865017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.895443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.896738 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.899417 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.914163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.958952 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.959422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.004119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.014928 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.017321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.020878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048527 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048586 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.050762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.095413 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.098799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.102213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.109797 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.134879 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.152272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.179687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.215621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.217689 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.222338 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.230225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.248722 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.257235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.257543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.258018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364315 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364353 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.365475 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.374606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.376905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc 
kubenswrapper[4766]: I0130 16:45:30.379376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.389134 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.419304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.434699 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.590272 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:45:30 crc kubenswrapper[4766]: W0130 16:45:30.595908 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7639b60e_a348_4203_84b6_68af413cd517.slice/crio-a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855 WatchSource:0}: Error finding container a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855: Status 404 returned error can't find the container with id a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855 Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.627881 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.674322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerStarted","Data":"a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855"} Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.943427 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.042326 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.043491 4766 util.go:30] "No sandbox for pod can be found. 
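The W-level manager.go:1169 entry above comes from the kubelet's embedded cAdvisor: it noticed the new crio-<id> cgroup for the freshly created sandbox before CRI-O had finished registering the container, so the lookup returned a 404. Seen once per new container during a burst of pod creations, this race is ordinarily transient and benign; it deserves attention only if the same container ID keeps failing. A small sketch (the file name kubelet.log is an assumption) that counts these warnings per container ID:

```python
# Count cAdvisor "Failed to process watch event ... 404" warnings per
# container ID; repeated hits for one ID would suggest a real problem.
import re
from collections import Counter

PAT = re.compile(r"Error finding container ([0-9a-f]{64})")

hits = Counter()
with open("kubelet.log") as fh:
    for line in fh:
        if "Failed to process watch event" in line and (m := PAT.search(line)):
            hits[m[1]] += 1

for cid, n in hits.most_common():
    print(f"{n:3d}  {cid[:12]}")
```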
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.046750 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.050211 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.073333 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.100016 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.155506 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.183027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.192039 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.192125 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.193414 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.201071 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.257162 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: W0130 16:45:31.267102 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode111c80e_0c45_49f2_bfc0_665fbdd2ac56.slice/crio-7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d WatchSource:0}: Error finding container 7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d: Status 404 returned error can't find the container with id 7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d Jan 30 16:45:31 crc kubenswrapper[4766]: W0130 16:45:31.323807 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92d5f78_a271_41e7_bde9_410e3db6ee58.slice/crio-91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc WatchSource:0}: Error finding container 91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc: Status 404 returned error can't find the container with id 91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.326386 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.382530 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.684761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerStarted","Data":"66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.686101 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.686147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.687744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerStarted","Data":"e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.698404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerStarted","Data":"d03cdc6170eebcf6ba04199860083b79a704186bcc24a8f0c94fb427aa1473a0"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.704270 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2sfxl" podStartSLOduration=2.7042499429999998 podStartE2EDuration="2.704249943s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:31.701674472 +0000 UTC m=+1386.339631818" watchObservedRunningTime="2026-01-30 16:45:31.704249943 +0000 UTC m=+1386.342207299" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.708664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.710251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.841441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.741775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerStarted","Data":"244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.742089 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerStarted","Data":"d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.744707 4766 generic.go:334] "Generic (PLEG): container finished" podID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerID="1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532" exitCode=0 Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.744794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.761925 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-d5p85" podStartSLOduration=1.7618991400000001 podStartE2EDuration="1.76189914s" podCreationTimestamp="2026-01-30 16:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:32.754569528 +0000 UTC m=+1387.392526894" watchObservedRunningTime="2026-01-30 16:45:32.76189914 +0000 UTC m=+1387.399856486" Jan 30 16:45:34 crc kubenswrapper[4766]: I0130 16:45:34.167890 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:34 crc kubenswrapper[4766]: I0130 16:45:34.235903 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.783997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.784344 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.784130 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata" containerID="cri-o://6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" gracePeriod=30 Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.785763 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log" containerID="cri-o://c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" gracePeriod=30 Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.789106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.789252 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.796241 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerStarted","Data":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"}
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.800430 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5" gracePeriod=30
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.801158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerStarted","Data":"135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5"}
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.806986 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.975874226 podStartE2EDuration="6.806968004s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.274374127 +0000 UTC m=+1385.912331463" lastFinishedPulling="2026-01-30 16:45:35.105467895 +0000 UTC m=+1389.743425241" observedRunningTime="2026-01-30 16:45:35.805671208 +0000 UTC m=+1390.443628564" watchObservedRunningTime="2026-01-30 16:45:35.806968004 +0000 UTC m=+1390.444925360"
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.808832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"}
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.808984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"}
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.834056 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6902316170000002 podStartE2EDuration="6.834037269s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:30.954092274 +0000 UTC m=+1385.592049620" lastFinishedPulling="2026-01-30 16:45:35.097897926 +0000 UTC m=+1389.735855272" observedRunningTime="2026-01-30 16:45:35.823085027 +0000 UTC m=+1390.461042383" watchObservedRunningTime="2026-01-30 16:45:35.834037269 +0000 UTC m=+1390.471994615"
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.848917 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" podStartSLOduration=5.848895438 podStartE2EDuration="5.848895438s" podCreationTimestamp="2026-01-30 16:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:35.840442895 +0000 UTC m=+1390.478400251" watchObservedRunningTime="2026-01-30 16:45:35.848895438 +0000 UTC m=+1390.486852784"
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.868400 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.907164185 podStartE2EDuration="6.868378243s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.13708637 +0000 UTC m=+1385.775043716" lastFinishedPulling="2026-01-30 16:45:35.098300428 +0000 UTC m=+1389.736257774" observedRunningTime="2026-01-30 16:45:35.865277628 +0000 UTC m=+1390.503235004" watchObservedRunningTime="2026-01-30 16:45:35.868378243 +0000 UTC m=+1390.506335589"
Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.888911 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.952517554 podStartE2EDuration="6.888887588s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.16036313 +0000 UTC m=+1385.798320476" lastFinishedPulling="2026-01-30 16:45:35.096733164 +0000 UTC m=+1389.734690510" observedRunningTime="2026-01-30 16:45:35.884700942 +0000 UTC m=+1390.522658298" watchObservedRunningTime="2026-01-30 16:45:35.888887588 +0000 UTC m=+1390.526844954"
Jan 30 16:45:36 crc kubenswrapper[4766]: I0130 16:45:36.819410 4766 generic.go:334] "Generic (PLEG): container finished" podID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerID="c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" exitCode=143
Jan 30 16:45:36 crc kubenswrapper[4766]: I0130 16:45:36.819551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b"}
Jan 30 16:45:38 crc kubenswrapper[4766]: I0130 16:45:38.857738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 16:45:39 crc kubenswrapper[4766]: I0130 16:45:39.847715 4766 generic.go:334] "Generic (PLEG): container finished" podID="7639b60e-a348-4203-84b6-68af413cd517" containerID="66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27" exitCode=0
Jan 30 16:45:39 crc kubenswrapper[4766]: I0130 16:45:39.847762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerDied","Data":"66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27"}
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.052027 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.052092 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.116202 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.116258 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.142977 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.366306 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.435817 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.630380 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.731440 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.735466 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-689xd" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" containerID="cri-o://c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" gracePeriod=10 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.889873 4766 generic.go:334] "Generic (PLEG): container finished" podID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerID="c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" exitCode=0 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.889934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2"} Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.891825 4766 generic.go:334] "Generic (PLEG): container finished" podID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerID="244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0" exitCode=0 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.892737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerDied","Data":"244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0"} Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.932309 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.138446 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.138406 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.350452 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.355312 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447455 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447608 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447650 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447752 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447776 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: 
\"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.469398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs" (OuterVolumeSpecName: "kube-api-access-2q7gs") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "kube-api-access-2q7gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.469497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7" (OuterVolumeSpecName: "kube-api-access-f5hb7") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "kube-api-access-f5hb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.480503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts" (OuterVolumeSpecName: "scripts") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.540336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data" (OuterVolumeSpecName: "config-data") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551467 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551507 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551517 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551544 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.569287 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.601822 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config" (OuterVolumeSpecName: "config") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.609591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.619462 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.622812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.630087 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653617 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653662 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653673 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653682 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653690 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653699 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerDied","Data":"a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855"} Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903580 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903252 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146"} Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907834 4766 scope.go:117] "RemoveContainer" containerID="c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907874 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.961768 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.967734 4766 scope.go:117] "RemoveContainer" containerID="4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.971083 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.062560 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" path="/var/lib/kubelet/pods/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae/volumes" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.098524 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129215 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129672 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" containerID="cri-o://bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129725 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" containerID="cri-o://0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.342014 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.489135 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.490195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.491215 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.491300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.496927 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts" (OuterVolumeSpecName: "scripts") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.497088 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx" (OuterVolumeSpecName: "kube-api-access-xwmmx") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "kube-api-access-xwmmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.531024 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.537437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data" (OuterVolumeSpecName: "config-data") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593329 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593419 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593451 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.869975 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.870203 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" containerID="cri-o://e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916582 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916576 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerDied","Data":"d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5"} Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916917 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919089 4766 generic.go:334] "Generic (PLEG): container finished" podID="79d5404e-802d-42c7-9245-579f6724b524" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" exitCode=143 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"} Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919315 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" containerID="cri-o://1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" gracePeriod=30 Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.026772 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027466 4766 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027561 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027639 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="init" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027742 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="init" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027828 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027898 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027995 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028072 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028404 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028514 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028676 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.034996 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.040994 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.053728 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102508 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102559 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102621 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.204989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.205054 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.205089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.210605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.218123 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.260204 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.358675 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.535922 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.614216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"17273647-f97c-490b-a766-fd4f004d3732\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.622637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m" (OuterVolumeSpecName: "kube-api-access-hpp5m") pod "17273647-f97c-490b-a766-fd4f004d3732" (UID: "17273647-f97c-490b-a766-fd4f004d3732"). InnerVolumeSpecName "kube-api-access-hpp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.719735 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931026 4766 generic.go:334] "Generic (PLEG): container finished" podID="17273647-f97c-490b-a766-fd4f004d3732" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" exitCode=2 Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerDied","Data":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerDied","Data":"6ab83b607cb34660892c3f858dbee7a7095d74efd1f6621864cf951d1afb4fc6"} Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931108 4766 scope.go:117] "RemoveContainer" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931242 4766 util.go:48] "No ready sandbox for pod can be found. 
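The unmount/detach and attach/mount pairs above are the kubelet's volume reconciler walking one secret or projected volume at a time from reconciler_common.go through operation_generator.go. Every entry shares one shape: a syslog prefix (timestamp, host, process tag), a klog header (severity letter, MMDD date, time, PID, source file:line), then the message. A minimal sketch of splitting those fields apart, assuming only the Go standard library (the regexp group layout is inferred from the entries above, not taken from kubelet itself):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// One kubenswrapper journal entry: "Jan 30 16:45:42 crc kubenswrapper[4766]: "
// followed by a klog header "I0130 16:45:42.489135 4766 reconciler_common.go:159] "
// and the structured message. Layout inferred from this log; illustrative only.
var entryRE = regexp.MustCompile(
	`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}) (\S+) kubenswrapper\[\d+\]: ` +
		`([IWE])\d{4} (\d{2}:\d{2}:\d{2}\.\d+) +\d+ ([\w./-]+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some entries run to several KB
	for sc.Scan() {
		m := entryRE.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // wrapped continuation fragment or non-kubelet line
		}
		// m[3] = severity (I/W/E), m[5] = source file:line, m[6] = message
		fmt.Printf("%s %-30s %.100s\n", m[3], m[5], m[6])
	}
}

Fed the section above on stdin, this prints one "severity file:line message" row per entry, which makes the repeating DELETE, Killing, ContainerDied, RemoveContainer, ADD cycles easier to scan.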
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.974788 4766 scope.go:117] "RemoveContainer" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.976373 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": container with ID starting with e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a not found: ID does not exist" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.976421 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} err="failed to get container status \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": rpc error: code = NotFound desc = could not find container \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": container with ID starting with e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a not found: ID does not exist" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.981070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.998685 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: W0130 16:45:44.005028 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa69536_b701_43a4_814a_2ba16974b1dd.slice/crio-dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7 WatchSource:0}: Error finding container dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7: Status 404 returned error can't find the container with id dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7 Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.008946 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: E0130 16:45:44.009540 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.009564 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.009764 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.010781 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.013155 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.019481 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.030368 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.053105 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17273647-f97c-490b-a766-fd4f004d3732" path="/var/lib/kubelet/pods/17273647-f97c-490b-a766-fd4f004d3732/volumes" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.053756 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.128068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.229929 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.235572 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.236859 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.244854 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.268141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.463021 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.913624 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: W0130 16:45:44.917987 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb576787_90a5_4e81_a047_6fcf37921335.slice/crio-004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0 WatchSource:0}: Error finding container 004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0: Status 404 returned error can't find the container with id 004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0 Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.964127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerStarted","Data":"7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.966533 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.966562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerStarted","Data":"dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.967707 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerStarted","Data":"004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.988384 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.988365167 podStartE2EDuration="2.988365167s" podCreationTimestamp="2026-01-30 16:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:44.98154534 +0000 UTC m=+1399.619502696" watchObservedRunningTime="2026-01-30 16:45:44.988365167 +0000 UTC m=+1399.626322513" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.081743 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082202 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" containerID="cri-o://3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082837 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" containerID="cri-o://095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082907 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" 
containerID="cri-o://c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082958 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" containerID="cri-o://d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.121394 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.124193 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.125720 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.125754 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.981210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerStarted","Data":"b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.981522 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984348 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" exitCode=0 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984843 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" exitCode=2 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984922 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" exitCode=0 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.985036 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.985058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} Jan 30 16:45:46 crc kubenswrapper[4766]: I0130 16:45:46.026463 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.661777633 podStartE2EDuration="3.026443156s" podCreationTimestamp="2026-01-30 16:45:43 +0000 UTC" firstStartedPulling="2026-01-30 16:45:44.919967255 +0000 UTC m=+1399.557924601" lastFinishedPulling="2026-01-30 16:45:45.284632788 +0000 UTC m=+1399.922590124" observedRunningTime="2026-01-30 16:45:46.018770575 +0000 UTC m=+1400.656727921" watchObservedRunningTime="2026-01-30 16:45:46.026443156 +0000 UTC m=+1400.664400502" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.532715 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.535305 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620022 4766 
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") "
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") "
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620122 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") "
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") "
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620657 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.625046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts" (OuterVolumeSpecName: "scripts") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.643211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l" (OuterVolumeSpecName: "kube-api-access-wh88l") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "kube-api-access-wh88l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.643286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2" (OuterVolumeSpecName: "kube-api-access-pvws2") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "kube-api-access-pvws2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.658553 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.679388 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.694404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data" (OuterVolumeSpecName: "config-data") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723626 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723655 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723667 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723676 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723684 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723692 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723700 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723708 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.749126 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.768009 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data" (OuterVolumeSpecName: "config-data") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.826922 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.826954 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.925500 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006695 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" exitCode=0
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006852 4766 scope.go:117] "RemoveContainer" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.007017 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018597 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" exitCode=0
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018660 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerDied","Data":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerDied","Data":"e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024445 4766 generic.go:334] "Generic (PLEG): container finished" podID="79d5404e-802d-42c7-9245-579f6724b524" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" exitCode=0
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b"}
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024531 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029125 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") "
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") "
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029321 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") "
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029389 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") "
Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029963 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs" (OuterVolumeSpecName: "logs") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.038140 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m" (OuterVolumeSpecName: "kube-api-access-lbq7m") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "kube-api-access-lbq7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.064479 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data" (OuterVolumeSpecName: "config-data") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.067429 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131210 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131250 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131259 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131270 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.153949 4766 scope.go:117] "RemoveContainer" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.183239 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.194878 4766 scope.go:117] "RemoveContainer" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.208915 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.220422 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.240595 4766 scope.go:117] "RemoveContainer" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244130 4766 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244600 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244613 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244630 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244639 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244669 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244675 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244686 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244692 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244700 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244706 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244723 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244731 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244746 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244752 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244920 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244934 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244946 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244954 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244969 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244979 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244994 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.258425 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.258533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.261383 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.267717 4766 scope.go:117] "RemoveContainer" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.268361 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": container with ID starting with 095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb not found: ID does not exist" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.268420 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} err="failed to get container status \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": rpc error: code = NotFound desc = could not find container \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": container with ID starting with 095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.268456 4766 scope.go:117] "RemoveContainer" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.269212 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": container with ID starting with c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14 not found: ID does not exist" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269259 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} err="failed to get container status \"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": rpc error: code = NotFound desc = could not find container 
\"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": container with ID starting with c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269404 4766 scope.go:117] "RemoveContainer" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.269951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": container with ID starting with d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2 not found: ID does not exist" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269969 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"} err="failed to get container status \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": rpc error: code = NotFound desc = could not find container \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": container with ID starting with d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.270081 4766 scope.go:117] "RemoveContainer" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.271500 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": container with ID starting with 3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0 not found: ID does not exist" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.271547 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} err="failed to get container status \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": rpc error: code = NotFound desc = could not find container \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": container with ID starting with 3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.271577 4766 scope.go:117] "RemoveContainer" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.272247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.285288 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.288090 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290340 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290653 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290850 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.293276 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.298031 4766 scope.go:117] "RemoveContainer" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.301806 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": container with ID starting with 1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5 not found: ID does not exist" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.301843 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"} err="failed to get container status \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": rpc error: code = NotFound desc = could not find container \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": container with ID starting with 1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.301868 4766 scope.go:117] "RemoveContainer" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.332058 4766 scope.go:117] "RemoveContainer" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334477 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334499 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334927 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.335110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.363291 4766 scope.go:117] "RemoveContainer" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.365101 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": container with ID starting with 0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf not found: ID does not exist" 
containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.365142 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"} err="failed to get container status \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": rpc error: code = NotFound desc = could not find container \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": container with ID starting with 0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.365171 4766 scope.go:117] "RemoveContainer" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.366545 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": container with ID starting with bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72 not found: ID does not exist" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.366579 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"} err="failed to get container status \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": rpc error: code = NotFound desc = could not find container \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": container with ID starting with bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.377096 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.391582 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.404799 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.406305 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.412470 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.428344 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a66dd3_f929_4a64_a1c3_82731fbe06e6.slice/crio-254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0b5877_d5fe_4d24_aaaa_d88eedb8ef63.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d5404e_802d_42c7_9245_579f6724b524.slice/crio-02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0b5877_d5fe_4d24_aaaa_d88eedb8ef63.slice/crio-e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a66dd3_f929_4a64_a1c3_82731fbe06e6.slice\": RecentStats: unable to find data in memory cache]" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.431611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436579 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436703 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436799 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436828 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437101 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437149 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.452091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.455924 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456727 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.463361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.465006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539472 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.540110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.542843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.543320 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.556733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.583670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.606797 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.725578 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.867386 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.046139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerStarted","Data":"416558aff7b28c3ff1ea22294f12594e969f7f4faf03939457f56d9bd99a3f11"} Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.215659 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.332161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: W0130 16:45:49.340979 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f697e9a_6e36_40c9_a199_29dc8ec19900.slice/crio-15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a WatchSource:0}: Error finding container 15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a: Status 404 returned error can't find the container with id 15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.050992 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d5404e-802d-42c7-9245-579f6724b524" path="/var/lib/kubelet/pods/79d5404e-802d-42c7-9245-579f6724b524/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.052045 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" path="/var/lib/kubelet/pods/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.052616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" path="/var/lib/kubelet/pods/c5a66dd3-f929-4a64-a1c3-82731fbe06e6/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073387 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.074882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerStarted","Data":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.078492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.078532 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"8b86ca0ddd886dfba467ba83639ed4630d6babe59e46210d85c130eb9061c10d"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.103823 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.10380739 podStartE2EDuration="2.10380739s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:50.097751693 +0000 UTC m=+1404.735709039" watchObservedRunningTime="2026-01-30 16:45:50.10380739 +0000 UTC m=+1404.741764736" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.115065 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.115047269 podStartE2EDuration="2.115047269s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:50.111249585 +0000 UTC m=+1404.749206931" watchObservedRunningTime="2026-01-30 16:45:50.115047269 +0000 UTC m=+1404.753004615" Jan 30 16:45:51 crc kubenswrapper[4766]: I0130 16:45:51.091395 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3"} Jan 30 16:45:52 crc kubenswrapper[4766]: I0130 16:45:52.101614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e"} Jan 30 16:45:53 crc kubenswrapper[4766]: I0130 16:45:53.395031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:53 crc kubenswrapper[4766]: I0130 16:45:53.584339 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 16:45:54 crc kubenswrapper[4766]: I0130 16:45:54.474391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.134316 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17"} Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.135392 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.157862 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.521245334 podStartE2EDuration="7.157841533s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="2026-01-30 16:45:49.225057564 +0000 UTC m=+1403.863014910" lastFinishedPulling="2026-01-30 16:45:53.861653753 +0000 
UTC m=+1408.499611109" observedRunningTime="2026-01-30 16:45:55.153213417 +0000 UTC m=+1409.791170773" watchObservedRunningTime="2026-01-30 16:45:55.157841533 +0000 UTC m=+1409.795798879" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.584136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.617767 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.726800 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.726878 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.194642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.811373 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.811700 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.237961 4766 generic.go:334] "Generic (PLEG): container finished" podID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerID="135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5" exitCode=137 Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.238060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerDied","Data":"135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240532 4766 generic.go:334] "Generic (PLEG): container finished" podID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerID="6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" exitCode=137 Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240618 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.261441 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.375999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376056 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376127 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs" (OuterVolumeSpecName: "logs") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.377332 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.381295 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44" (OuterVolumeSpecName: "kube-api-access-l2n44") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "kube-api-access-l2n44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.410408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.410982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data" (OuterVolumeSpecName: "config-data") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478897 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478930 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478944 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.637073 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.790937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.790983 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.791010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.795097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk" (OuterVolumeSpecName: "kube-api-access-xxcbk") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "kube-api-access-xxcbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.818958 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data" (OuterVolumeSpecName: "config-data") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.820576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893595 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893626 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893635 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerDied","Data":"d03cdc6170eebcf6ba04199860083b79a704186bcc24a8f0c94fb427aa1473a0"} Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250228 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250585 4766 scope.go:117] "RemoveContainer" containerID="135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250269 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.285434 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.304914 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.321231 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.331892 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.347661 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348096 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348113 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log" Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348124 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348133 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata" Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348154 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:46:07 crc 
kubenswrapper[4766]: I0130 16:46:07.348160 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348359 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348371 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348383 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.349026 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352119 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.354832 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.356696 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.359673 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.359964 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.367657 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.378647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504967 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505125 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606580 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.607884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.612070 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.612881 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613152 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0" Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 
16:46:07.615658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.616667 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.624319 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.628272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.678314 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.707762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.057534 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" path="/var/lib/kubelet/pods/e0275f96-c8b4-4219-8a95-f8cfa7a4edca/volumes"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.058511 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" path="/var/lib/kubelet/pods/e111c80e-0c45-49f2-bfc0-665fbdd2ac56/volumes"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.234004 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.261039 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"00d52323719cdcf153e25b7a1622f149993ee5f6d853ba11e47ebf2bd0e4a738"}
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.303565 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:08 crc kubenswrapper[4766]: W0130 16:46:08.306652 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2852c370_2b06_4a98_9d48_190ed09dc7fb.slice/crio-e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f WatchSource:0}: Error finding container e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f: Status 404 returned error can't find the container with id e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.729628 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.730676 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.732721 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.733476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.272147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.272479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.275328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerStarted","Data":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.275363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerStarted","Data":"e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.276250 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.279320 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.296978 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.296961489 podStartE2EDuration="2.296961489s" podCreationTimestamp="2026-01-30 16:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:09.28934217 +0000 UTC m=+1423.927299536" watchObservedRunningTime="2026-01-30 16:46:09.296961489 +0000 UTC m=+1423.934918835"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.333414 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.333393891 podStartE2EDuration="2.333393891s" podCreationTimestamp="2026-01-30 16:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:09.327003596 +0000 UTC m=+1423.964960942" watchObservedRunningTime="2026-01-30 16:46:09.333393891 +0000 UTC m=+1423.971351237"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.491511 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.493741 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.500703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.671841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672430 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672587 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775415 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.777092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.809066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
\"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.840825 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:46:10 crc kubenswrapper[4766]: W0130 16:46:10.353489 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc575168_b373_41ba_9dd6_2d9d168a6527.slice/crio-5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6 WatchSource:0}: Error finding container 5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6: Status 404 returned error can't find the container with id 5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6 Jan 30 16:46:10 crc kubenswrapper[4766]: I0130 16:46:10.354582 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292503 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerID="171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e" exitCode=0 Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e"} Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerStarted","Data":"5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6"} Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.821052 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822054 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" containerID="cri-o://10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17" gracePeriod=30 Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822093 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core" containerID="cri-o://0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e" gracePeriod=30 Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822152 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent" containerID="cri-o://88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3" gracePeriod=30 Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822497 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent" containerID="cri-o://f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f" gracePeriod=30 Jan 30 16:46:11 
crc kubenswrapper[4766]: I0130 16:46:11.844769 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.210394 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.302206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerStarted","Data":"961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310"} Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.302639 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.305971 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17" exitCode=0 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.305996 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e" exitCode=2 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306005 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3" exitCode=0 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306030 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f" exitCode=0 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17"} Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306256 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" containerID="cri-o://d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" gracePeriod=30 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e"} Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3"} Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306312 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f"} Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306555 4766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" containerID="cri-o://7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" gracePeriod=30 Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.330995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" podStartSLOduration=3.330978434 podStartE2EDuration="3.330978434s" podCreationTimestamp="2026-01-30 16:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:12.322996452 +0000 UTC m=+1426.960953798" watchObservedRunningTime="2026-01-30 16:46:12.330978434 +0000 UTC m=+1426.968935780" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.678483 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.708561 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.708618 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.831780 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950221 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950304 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950463 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod 
\"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.951083 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.951270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.957563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts" (OuterVolumeSpecName: "scripts") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.958037 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh" (OuterVolumeSpecName: "kube-api-access-f9wmh") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "kube-api-access-f9wmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.981967 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.029408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053222 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053261 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053274 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053286 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053297 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.055728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.096336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data" (OuterVolumeSpecName: "config-data") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.154314 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.154364 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.315988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"8b86ca0ddd886dfba467ba83639ed4630d6babe59e46210d85c130eb9061c10d"} Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.316044 4766 scope.go:117] "RemoveContainer" containerID="10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.316070 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.318121 4766 generic.go:334] "Generic (PLEG): container finished" podID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" exitCode=143 Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.318226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"} Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.401567 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.410633 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.423621 4766 scope.go:117] "RemoveContainer" containerID="0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.428743 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429178 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429204 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429216 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429222 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429247 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429253 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429271 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429277 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429436 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429450 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429466 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429483 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.445025 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.449369 4766 scope.go:117] "RemoveContainer" containerID="88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450215 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.451658 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.499045 4766 scope.go:117] "RemoveContainer" containerID="f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " 
pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566170 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668286 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668312 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668450 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: 
\"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.669551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.669621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.672820 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.673480 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.675906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.677311 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.683267 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod 
\"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.699383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.782417 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.815905 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.049593 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" path="/var/lib/kubelet/pods/4682a3ba-d8f2-48f0-820c-961ee175193e/volumes" Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.244163 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:14 crc kubenswrapper[4766]: W0130 16:46:14.249451 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01bf866a_799b_42df_8838_91933afbb104.slice/crio-c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488 WatchSource:0}: Error finding container c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488: Status 404 returned error can't find the container with id c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488 Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.329453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488"} Jan 30 16:46:15 crc kubenswrapper[4766]: I0130 16:46:15.339386 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} Jan 30 16:46:15 crc kubenswrapper[4766]: I0130 16:46:15.977369 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026897 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.028748 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs" (OuterVolumeSpecName: "logs") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.033376 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb" (OuterVolumeSpecName: "kube-api-access-gb8mb") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "kube-api-access-gb8mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.071029 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.087346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data" (OuterVolumeSpecName: "config-data") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130147 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130208 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130222 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130234 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.350218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353168 4766 generic.go:334] "Generic (PLEG): container finished" podID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" exitCode=0 Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353275 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353280 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"}
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353394 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a"}
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353419 4766 scope.go:117] "RemoveContainer" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.385152 4766 scope.go:117] "RemoveContainer" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.391316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.399680 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415109 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.415470 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log"
Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.415514 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415520 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.416962 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.416998 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.417981 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.422193 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.422303 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.424379 4766 scope.go:117] "RemoveContainer" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"
Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.426447 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": container with ID starting with 7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0 not found: ID does not exist" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.426482 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"} err="failed to get container status \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": rpc error: code = NotFound desc = could not find container \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": container with ID starting with 7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0 not found: ID does not exist"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.426509 4766 scope.go:117] "RemoveContainer" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"
Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.428000 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": container with ID starting with d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9 not found: ID does not exist" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.428034 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"} err="failed to get container status \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": rpc error: code = NotFound desc = could not find container \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": container with ID starting with d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9 not found: ID does not exist"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.431562 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.434463 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.434940 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.442875 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536502 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0"
\"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.538840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551123 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551771 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.554989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.732754 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.198701 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.369481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"1a67e71d5d71a3f934c66c454b741f0b3ac1c9d352fcd86ce01318614ddc8465"} Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.679026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.696283 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.709057 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.709287 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.051215 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" path="/var/lib/kubelet/pods/0f697e9a-6e36-40c9-a199-29dc8ec19900/volumes" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.385459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.387624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.387687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.429267 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.430914 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.430891187 podStartE2EDuration="2.430891187s" podCreationTimestamp="2026-01-30 16:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:18.406214397 +0000 UTC m=+1433.044171743" watchObservedRunningTime="2026-01-30 16:46:18.430891187 +0000 UTC m=+1433.068848533" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.578161 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.579546 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.582439 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.582643 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.587841 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704244 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.726152 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.726209 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: 
\"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806582 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.829957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.829954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.838888 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.843986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.902912 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.579524 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.842348 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.905169 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.905479 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" containerID="cri-o://89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" gracePeriod=10 Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.412953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerStarted","Data":"a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.413373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerStarted","Data":"e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420139 4766 generic.go:334] "Generic (PLEG): container finished" podID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerID="89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" exitCode=0 Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420213 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420259 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.434207 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rlpcs" podStartSLOduration=2.434159729 podStartE2EDuration="2.434159729s" podCreationTimestamp="2026-01-30 16:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:20.428079899 +0000 UTC m=+1435.066037255" watchObservedRunningTime="2026-01-30 16:46:20.434159729 +0000 UTC m=+1435.072117075" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.520343 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663452 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663521 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663634 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663793 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.670367 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67" (OuterVolumeSpecName: "kube-api-access-nhm67") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "kube-api-access-nhm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.732037 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.734266 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.737955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.747765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.755098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config" (OuterVolumeSpecName: "config") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766270 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766305 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766315 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766325 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766333 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766356 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.428439 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.510971 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.521156 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.058849 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" path="/var/lib/kubelet/pods/d92d5f78-a271-41e7-bde9-410e3db6ee58/volumes" Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.439804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.439959 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" containerID="cri-o://08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.440238 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441551 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" containerID="cri-o://5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441609 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" containerID="cri-o://ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441648 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" containerID="cri-o://0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.468402 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.387301428 podStartE2EDuration="9.468384755s" podCreationTimestamp="2026-01-30 16:46:13 +0000 UTC" firstStartedPulling="2026-01-30 16:46:14.254902289 +0000 UTC m=+1428.892859635" lastFinishedPulling="2026-01-30 16:46:21.335985616 +0000 UTC m=+1435.973942962" observedRunningTime="2026-01-30 16:46:22.459285051 +0000 UTC m=+1437.097242397" watchObservedRunningTime="2026-01-30 16:46:22.468384755 +0000 UTC m=+1437.106342101" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.193334 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319468 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319512 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319566 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319686 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319895 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319925 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.323649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.323818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.326491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd" (OuterVolumeSpecName: "kube-api-access-pgmvd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "kube-api-access-pgmvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.327816 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts" (OuterVolumeSpecName: "scripts") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.351971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.384908 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423469 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423506 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423517 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423525 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423534 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423542 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.445541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476873 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476908 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" exitCode=2 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476917 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476926 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data" (OuterVolumeSpecName: "config-data") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477014 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477018 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.510868 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.525752 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.525803 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.547775 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.551227 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.555884 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619233 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619815 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619834 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619856 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="init" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619861 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="init" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619870 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619876 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619887 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619893 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619908 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc 
kubenswrapper[4766]: E0130 16:46:23.619929 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619936 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620105 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620122 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620134 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620144 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620154 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.621694 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.630991 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631893 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.656214 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.692834 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.693307 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.693659 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting 
with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.693696 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.694430 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.694601 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.694631 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.694973 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695010 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695033 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.695412 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695453 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": 
rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695486 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696125 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696154 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696439 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696464 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696969 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697004 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697350 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697375 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc 
kubenswrapper[4766]: I0130 16:46:23.697774 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697792 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698062 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698087 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698334 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698353 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698544 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698565 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698737 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID 
starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698752 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698922 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698935 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699121 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699133 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699396 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738245 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738284 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738315 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " 
pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738358 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738443 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839546 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839625 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839863 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.840254 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.840866 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.843798 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.843932 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.844815 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.853788 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.854450 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.855563 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.946448 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.067394 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bf866a-799b-42df-8838-91933afbb104" path="/var/lib/kubelet/pods/01bf866a-799b-42df-8838-91933afbb104/volumes" Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.436574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.491467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"9e20509f1f367971ebad4df00092bfa9e6a737cd37ee5f2217bf7f1fb1c22b6c"} Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.511659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a"} Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.513705 4766 generic.go:334] "Generic (PLEG): container finished" podID="c683df85-82ee-4038-883c-c47b3aa46bec" containerID="a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d" exitCode=0 Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.513750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerDied","Data":"a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d"} Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.524137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6"} Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.734131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.736445 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.875410 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936586 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936746 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936869 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.944333 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv" (OuterVolumeSpecName: "kube-api-access-qrlmv") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "kube-api-access-qrlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.958579 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts" (OuterVolumeSpecName: "scripts") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.993749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data" (OuterVolumeSpecName: "config-data") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.020358 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038383 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038570 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038681 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038749 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.534803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerDied","Data":"e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe"} Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.535922 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.534831 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.537566 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4"} Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708233 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708478 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" containerID="cri-o://5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708940 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" containerID="cri-o://4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.723506 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.723719 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Jan 
30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.740046 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.740401 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" containerID="cri-o://23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.748429 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.761669 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.762499 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.795128 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.548835 4766 generic.go:334] "Generic (PLEG): container finished" podID="23e893e4-3d60-421d-ad41-bc0f76112015" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" exitCode=143 Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.548899 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.555989 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.586066 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.587300 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.588512 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.588554 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.558404 4766 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" containerID="cri-o://817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" gracePeriod=30 Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.558766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4"} Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.559203 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" containerID="cri-o://f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" gracePeriod=30 Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.560608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.589885 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.006412179 podStartE2EDuration="6.589866821s" podCreationTimestamp="2026-01-30 16:46:23 +0000 UTC" firstStartedPulling="2026-01-30 16:46:24.432162964 +0000 UTC m=+1439.070120310" lastFinishedPulling="2026-01-30 16:46:29.015617606 +0000 UTC m=+1443.653574952" observedRunningTime="2026-01-30 16:46:29.582084263 +0000 UTC m=+1444.220041609" watchObservedRunningTime="2026-01-30 16:46:29.589866821 +0000 UTC m=+1444.227824167" Jan 30 16:46:30 crc kubenswrapper[4766]: I0130 16:46:30.569498 4766 generic.go:334] "Generic (PLEG): container finished" podID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" exitCode=143 Jan 30 16:46:30 crc kubenswrapper[4766]: I0130 16:46:30.573167 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.542829 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584719 4766 generic.go:334] "Generic (PLEG): container finished" podID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" exitCode=0 Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584769 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerDied","Data":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerDied","Data":"416558aff7b28c3ff1ea22294f12594e969f7f4faf03939457f56d9bd99a3f11"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584812 4766 scope.go:117] "RemoveContainer" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584923 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.642881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.643118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.643238 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.652944 4766 scope.go:117] "RemoveContainer" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.658613 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8" (OuterVolumeSpecName: "kube-api-access-q8nc8") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "kube-api-access-q8nc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.677689 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": container with ID starting with 23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e not found: ID does not exist" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.677750 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} err="failed to get container status \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": rpc error: code = NotFound desc = could not find container \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": container with ID starting with 23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e not found: ID does not exist" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.732355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.746992 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.747036 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.760356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data" (OuterVolumeSpecName: "config-data") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.827256 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50848->10.217.0.196:8775: read: connection reset by peer" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.827645 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50842->10.217.0.196:8775: read: connection reset by peer" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.848383 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.937052 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.954255 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.962716 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.964261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964358 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.964626 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964687 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964931 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964992 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.965613 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.968001 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.998921 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052619 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157380 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.162873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.166906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.176460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrmz\" (UniqueName: 
\"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.288263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.299196 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.361024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.361537 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs" (OuterVolumeSpecName: "logs") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.375246 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx" (OuterVolumeSpecName: "kube-api-access-mf4cx") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "kube-api-access-mf4cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.416557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data" (OuterVolumeSpecName: "config-data") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.420418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.454860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463517 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463565 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463577 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463588 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463599 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.555711 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604129 4766 generic.go:334] "Generic (PLEG): container finished" podID="23e893e4-3d60-421d-ad41-bc0f76112015" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" exitCode=0 Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604202 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"1a67e71d5d71a3f934c66c454b741f0b3ac1c9d352fcd86ce01318614ddc8465"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604270 4766 scope.go:117] "RemoveContainer" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604656 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627330 4766 generic.go:334] "Generic (PLEG): container finished" podID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" exitCode=0 Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627378 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"00d52323719cdcf153e25b7a1622f149993ee5f6d853ba11e47ebf2bd0e4a738"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627425 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667084 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667173 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667284 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667867 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs" (OuterVolumeSpecName: "logs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.668152 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.668786 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.671618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t" (OuterVolumeSpecName: "kube-api-access-fxs9t") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "kube-api-access-fxs9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.719998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data" (OuterVolumeSpecName: "config-data") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.737588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.741787 4766 scope.go:117] "RemoveContainer" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.749342 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.754441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.763817 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764280 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764307 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764327 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764335 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764352 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764361 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764384 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764391 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764548 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764565 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" 
containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764577 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764591 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.765592 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.773809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.774199 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.820464 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826316 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826457 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826471 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.839512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.859426 4766 scope.go:117] "RemoveContainer" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.864583 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": container with ID starting with 4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3 not found: ID does not exist" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.864624 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} err="failed to get container status \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": rpc error: code = NotFound desc = could not find container \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": container with ID starting with 4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.864652 4766 scope.go:117] "RemoveContainer" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.865561 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": container with ID starting with 5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65 not found: ID does not exist" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.865598 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} err="failed to get container status \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": rpc error: code = NotFound desc = could not find container \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": container with ID starting with 5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.865614 4766 scope.go:117] "RemoveContainer" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.873859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:33 crc kubenswrapper[4766]: W0130 16:46:33.873511 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f217490_8a26_4f4b_935b_fe5918500948.slice/crio-f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b WatchSource:0}: Error finding container f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b: Status 404 returned error can't find the container with id f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.883259 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.904056 4766 scope.go:117] "RemoveContainer" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929165 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929246 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929398 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929410 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937265 4766 scope.go:117] "RemoveContainer" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"
Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.937815 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": container with ID starting with f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80 not found: ID does not exist" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937846 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"} err="failed to get container status \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": rpc error: code = NotFound desc = could not find container \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": container with ID starting with f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80 not found: ID does not exist"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937870 4766 scope.go:117] "RemoveContainer" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"
Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.938092 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": container with ID starting with 817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241 not found: ID does not exist" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.938126 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"} err="failed to get container status \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": rpc error: code = NotFound desc = could not find container \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": container with ID starting with 817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241 not found: ID does not exist"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.946220 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.960862 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.974383 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.976105 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.977886 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.978685 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.979080 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.986139 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030748 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030928 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.031641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.035713 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.035724 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.036419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.050212 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.053829 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" path="/var/lib/kubelet/pods/23e893e4-3d60-421d-ad41-bc0f76112015/volumes"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.056317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" path="/var/lib/kubelet/pods/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9/volumes"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.057034 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" path="/var/lib/kubelet/pods/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861/volumes"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133015 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.151985 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.239599 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.247172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.247692 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.249151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.252269 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.254782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0"
Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.306528 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.641616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerStarted","Data":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.641659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerStarted","Data":"f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.672206 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.674670 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.674653491 podStartE2EDuration="2.674653491s" podCreationTimestamp="2026-01-30 16:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:34.655931798 +0000 UTC m=+1449.293889154" watchObservedRunningTime="2026-01-30 16:46:34.674653491 +0000 UTC m=+1449.312610837"
Jan 30 16:46:35 crc kubenswrapper[4766]: W0130 16:46:34.798731 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ae2453_74fa_4114_9261_21b381518493.slice/crio-7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244 WatchSource:0}: Error finding container 7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244: Status 404 returned error can't find the container with id 7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.800464 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"fbc4233875c212f4b897d1f9917772ed396cd3598ca0ca808134dccd327aa2de"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653517 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244"}
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.680687 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.680663779 podStartE2EDuration="2.680663779s" podCreationTimestamp="2026-01-30 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:35.670222578 +0000 UTC m=+1450.308179944" watchObservedRunningTime="2026-01-30 16:46:35.680663779 +0000 UTC m=+1450.318621135"
Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.693462 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.693443946 podStartE2EDuration="2.693443946s" podCreationTimestamp="2026-01-30 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:35.691254415 +0000 UTC m=+1450.329211771" watchObservedRunningTime="2026-01-30 16:46:35.693443946 +0000 UTC m=+1450.331401292"
Jan 30 16:46:38 crc kubenswrapper[4766]: I0130 16:46:38.288556 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 16:46:39 crc kubenswrapper[4766]: I0130 16:46:39.152961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:46:39 crc kubenswrapper[4766]: I0130 16:46:39.153319 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.289421 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.327649 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.765191 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.153031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.153078 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.307737 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.307815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.172314 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.172329 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.321434 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.321452 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:46:53 crc kubenswrapper[4766]: I0130 16:46:53.955378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.159008 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.159111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.166156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.167462 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.317909 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.319642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.329099 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.335201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.833834 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.842408 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:56 crc kubenswrapper[4766]: I0130 16:46:56.991142 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"]
Jan 30 16:46:56 crc kubenswrapper[4766]: I0130 16:46:56.993529 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.000843 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"]
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072580 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072702 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.175350 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.175362 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.200488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.329297 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.862338 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"]
Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878583 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" exitCode=0
Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878898 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14"}
Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"721b24966425ad3828c4ed010c44283d43a0eeb0f5dae60a2287376c39e4728d"}
Jan 30 16:46:59 crc kubenswrapper[4766]: I0130 16:46:59.889000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"}
Jan 30 16:47:02 crc kubenswrapper[4766]: I0130 16:47:02.918560 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" exitCode=0
Jan 30 16:47:02 crc kubenswrapper[4766]: I0130 16:47:02.918660 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"}
Jan 30 16:47:03 crc kubenswrapper[4766]: I0130 16:47:03.933245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"}
Jan 30 16:47:03 crc kubenswrapper[4766]: I0130 16:47:03.966167 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6kx5n" podStartSLOduration=3.4675488789999998 podStartE2EDuration="7.966126771s" podCreationTimestamp="2026-01-30 16:46:56 +0000 UTC" firstStartedPulling="2026-01-30 16:46:58.881012312 +0000 UTC m=+1473.518969658" lastFinishedPulling="2026-01-30 16:47:03.379590194 +0000 UTC m=+1478.017547550" observedRunningTime="2026-01-30 16:47:03.950815174 +0000 UTC m=+1478.588772540" watchObservedRunningTime="2026-01-30 16:47:03.966126771 +0000 UTC m=+1478.604084117"
Jan 30 16:47:07 crc kubenswrapper[4766]: I0130 16:47:07.329961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:47:07 crc kubenswrapper[4766]: I0130 16:47:07.330322 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:47:08 crc kubenswrapper[4766]: I0130 16:47:08.375240 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6kx5n" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:47:08 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:47:08 crc kubenswrapper[4766]: >
Jan 30 16:47:09 crc kubenswrapper[4766]: I0130 16:47:09.045520 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:47:09 crc kubenswrapper[4766]: I0130 16:47:09.045783 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.420860 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.422504 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.428943 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.500945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.543408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.543560 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.605254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.630812 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.645474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.645619 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.646402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.662238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jfd74"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.663655 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.670089 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.683267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jfd74"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.713251 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.713503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" containerID="cri-o://df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" gracePeriod=2
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.742228 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.747552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.747601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.791222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.854758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.854806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.855513 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.873609 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.912023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.999264 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.055554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.061063 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.104294 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" path="/var/lib/kubelet/pods/3747d6ac-f476-429b-83b8-c5a65a241d47/volumes" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.104961 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9dd82ac-e512-442e-97c4-53be730affca" path="/var/lib/kubelet/pods/e9dd82ac-e512-442e-97c4-53be730affca/volumes" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.121676 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.122085 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122099 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122300 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122870 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.129599 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.160572 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.164288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.164366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.195255 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.203234 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.245905 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.277519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.277992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.280115 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.280578 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.281346 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" 
containerID="cri-o://68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" gracePeriod=300 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.336633 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.337042 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" containerID="cri-o://1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" gracePeriod=30 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.337398 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" containerID="cri-o://722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad" gracePeriod=30 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.357618 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.384566 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.384627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:14.884607056 +0000 UTC m=+1489.522564412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.393551 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.465532 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.518247 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.537706 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" containerID="cri-o://20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" gracePeriod=300 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.588281 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.589748 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.619801 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.627395 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.669536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.674636 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.693950 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.694011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.786344 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.787755 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.801761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.801877 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.803034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.805528 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.819886 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.831246 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.869230 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.886905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.896722 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.904603 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.904900 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc 
kubenswrapper[4766]: E0130 16:47:14.905329 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.905458 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:15.905436678 +0000 UTC m=+1490.543394024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.931119 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.962310 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.978486 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.995448 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.009480 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.021412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.023106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.022725 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.035994 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.055697 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.078564 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.104801 4766 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.208436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237520 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237878 4766 generic.go:334] "Generic (PLEG): container finished" podID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerID="68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" exitCode=2 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237902 4766 generic.go:334] "Generic (PLEG): container finished" podID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerID="20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" exitCode=143 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.238001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.238031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.252704 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerID="722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad" exitCode=2 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.252773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.301385 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.330484 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.330806 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-rsxl2" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" containerID="cri-o://ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.358748 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp"
Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.398531 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:47:15 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: if [ -n "" ]; then
Jan 30 16:47:15 crc kubenswrapper[4766]: GRANT_DATABASE=""
Jan 30 16:47:15 crc kubenswrapper[4766]: else
Jan 30 16:47:15 crc kubenswrapper[4766]: GRANT_DATABASE="*"
Jan 30 16:47:15 crc kubenswrapper[4766]: fi
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: # going for maximum compatibility here:
Jan 30 16:47:15 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 16:47:15 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 16:47:15 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 16:47:15 crc kubenswrapper[4766]: # support updates
Jan 30 16:47:15 crc kubenswrapper[4766]:
Jan 30 16:47:15 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.400262 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091"
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.449963 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh"]
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.503412 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"]
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.551801 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"]
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.608232 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"]
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.692255 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.692496 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" containerID="cri-o://961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" gracePeriod=10
Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.747319 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sc6rp"]
Jan 30 16:47:15 crc
kubenswrapper[4766]: E0130 16:47:15.799578 4766 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-clmnh" message="Exiting ovn-controller (1) " Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.799616 4766 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" containerID="cri-o://cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.799648 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" containerID="cri-o://cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.799783 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.825291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.826159 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" containerID="cri-o://0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" gracePeriod=300 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.845205 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.854703 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.871035 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.883196 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.906500 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.916306 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.916409 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.916384444 +0000 UTC m=+1492.554341790 (durationBeforeRetry 2s). 
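The "PreStop hook failed" entries above end with exit status 137, which follows the shell convention of 128 + signal number: signal 9, SIGKILL. In other words, the ovn-ctl stop_controller hook did not exit on its own; its process was killed, typically because the container was torn down while the hook was still running. A minimal reproduction of the convention outside the cluster (illustrative only, not taken from this log):

sh -c 'kill -9 $$'   # start a shell and SIGKILL it
echo $?              # prints 137 = 128 + 9
kill -l 137          # prints KILL, the signal encoded in that status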
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.916695 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" containerID="cri-o://35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" gracePeriod=300 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.935499 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.935732 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" containerID="cri-o://e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.936138 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" containerID="cri-o://a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.963666 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.963896 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" containerID="cri-o://ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.964296 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" containerID="cri-o://a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.976249 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.976643 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" containerID="cri-o://a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.977151 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" containerID="cri-o://f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.990684 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": EOF" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 
16:47:15.990838 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": EOF" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994187 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994401 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d8797fb6-zzsfd" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" containerID="cri-o://13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994799 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d8797fb6-zzsfd" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" containerID="cri-o://e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.015139 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.015676 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" containerID="cri-o://7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.016156 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" containerID="cri-o://7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.030546 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.030627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:16.530600266 +0000 UTC m=+1491.168557622 (durationBeforeRetry 500ms). 
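The nestedpendingoperations entries trace the kubelet's per-volume exponential backoff: rabbitmq-server-0's config-data volume is first retried after 500ms and then 1s, while rabbitmq-cell1-server-0's volume has already reached 1s and then 2s. The delay doubles on each consecutive failure, and the kubelet caps it internally. A sketch of the same cadence, where try_mount is a hypothetical stand-in and not kubelet code:

# illustrative retry loop only; try_mount is a placeholder command
delay_ms=500
until try_mount; do
    sleep "$(awk "BEGIN{print $delay_ms/1000}")"   # 0.5s, 1s, 2s, ...
    delay_ms=$((delay_ms * 2))                     # real kubelet caps this
done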
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.128695 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" path="/var/lib/kubelet/pods/0c69ac66-232c-41b5-95a8-66eeb597bf70/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.132943 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" path="/var/lib/kubelet/pods/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.133623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" path="/var/lib/kubelet/pods/10bcd3d7-2c30-4a51-9455-2ffed88a7f43/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.134165 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" path="/var/lib/kubelet/pods/3a05e847-bb50-49ab-821d-e2432c0f01e9/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.134972 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" path="/var/lib/kubelet/pods/42d1f0ba-d11c-4e08-9e01-5783f42a6b84/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.137110 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" path="/var/lib/kubelet/pods/4bc27037-152a-461b-bce1-6d37b38bbb95/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.137831 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" path="/var/lib/kubelet/pods/75830eb2-571a-4fef-92b5-057b0928cfe0/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.140339 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7639b60e-a348-4203-84b6-68af413cd517" path="/var/lib/kubelet/pods/7639b60e-a348-4203-84b6-68af413cd517/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.142039 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" path="/var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.143123 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" path="/var/lib/kubelet/pods/98478911-5d75-4bba-a256-e1c2c28e56de/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.144556 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" path="/var/lib/kubelet/pods/ad8b317f-6f81-4ac9-a854-7b71e384ed98/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.147542 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" path="/var/lib/kubelet/pods/c683df85-82ee-4038-883c-c47b3aa46bec/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.150458 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" path="/var/lib/kubelet/pods/db058df5-07b8-4d6e-a646-48ac7105c516/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: 
I0130 16:47:16.153964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.153998 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154012 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154026 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154479 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" containerID="cri-o://374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154792 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" containerID="cri-o://9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154841 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" containerID="cri-o://fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154872 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" containerID="cri-o://1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154902 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater" containerID="cri-o://2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154935 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" containerID="cri-o://cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154965 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" containerID="cri-o://686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154993 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" containerID="cri-o://93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" gracePeriod=30 Jan 30 
16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155020 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" containerID="cri-o://ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" containerID="cri-o://3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155077 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" containerID="cri-o://7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155103 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" containerID="cri-o://4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155130 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" containerID="cri-o://b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155160 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" containerID="cri-o://8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155222 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" containerID="cri-o://13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.175350 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.186068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.252302 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.283291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.320899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.321195 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" 
containerID="cri-o://2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.321319 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" containerID="cri-o://7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.328881 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.347372 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.379490 4766 generic.go:334] "Generic (PLEG): container finished" podID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.379654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.382839 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rsxl2_140fa04a-cb22-40ed-a08c-17f4ea13a5c4/openstack-network-exporter/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.382905 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.383261 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.386859 4766 generic.go:334] "Generic (PLEG): container finished" podID="372f7d7a-9066-4b9b-884a-5257785ed101" containerID="df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" exitCode=137 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.390413 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.390453 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerID="0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" exitCode=2 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392044 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerID="35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270"} Jan 30 16:47:16 crc 
kubenswrapper[4766]: I0130 16:47:16.395588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jfd74" event={"ID":"4e9bbf1f-b039-4112-ab71-308535065091","Type":"ContainerStarted","Data":"fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64"}
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.398927 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-jfd74" secret="" err="secret \"galera-openstack-cell1-dockercfg-zd2kf\" not found"
Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.431651 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:47:16 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: if [ -n "" ]; then
Jan 30 16:47:16 crc kubenswrapper[4766]: GRANT_DATABASE=""
Jan 30 16:47:16 crc kubenswrapper[4766]: else
Jan 30 16:47:16 crc kubenswrapper[4766]: GRANT_DATABASE="*"
Jan 30 16:47:16 crc kubenswrapper[4766]: fi
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: # going for maximum compatibility here:
Jan 30 16:47:16 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 16:47:16 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 16:47:16 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 16:47:16 crc kubenswrapper[4766]: # support updates
Jan 30 16:47:16 crc kubenswrapper[4766]:
Jan 30 16:47:16 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.433059 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091"
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461885 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461977 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.462003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") "
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.462511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.464824 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4").
InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.467814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config" (OuterVolumeSpecName: "config") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.472768 4766 generic.go:334] "Generic (PLEG): container finished" podID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerID="cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" exitCode=0 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.472863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerDied","Data":"cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.473906 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.473979 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504638 4766 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504677 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504686 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.545482 4766 generic.go:334] "Generic (PLEG): container finished" podID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.545557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549135 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rsxl2_140fa04a-cb22-40ed-a08c-17f4ea13a5c4/openstack-network-exporter/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549185 4766 generic.go:334] "Generic (PLEG): container finished" podID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerID="ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" exitCode=2 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" 
event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerDied","Data":"ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549277 4766 scope.go:117] "RemoveContainer" containerID="ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549494 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.563424 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.563803 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" containerID="cri-o://e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.564142 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" containerID="cri-o://f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.586557 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerID="961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" exitCode=0 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.586685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.591119 4766 generic.go:334] "Generic (PLEG): container finished" podID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerID="13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.591304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.602798 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.603645 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" containerID="cri-o://7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.605084 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" containerID="cri-o://078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.607983 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608142 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608193 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608497 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.610738 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4" (OuterVolumeSpecName: "kube-api-access-zh9x4") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "kube-api-access-zh9x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.613006 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config" (OuterVolumeSpecName: "config") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.613587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.637999 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts" (OuterVolumeSpecName: "scripts") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.664586 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.664658 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.16463952 +0000 UTC m=+1491.802596866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665588 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665605 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669466 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669500 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669511 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.671012 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.671116 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.671084391 +0000 UTC m=+1492.309041737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.679117 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.686101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc" (OuterVolumeSpecName: "kube-api-access-85mnc") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "kube-api-access-85mnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.702322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.771773 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.771804 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.775361 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.796972 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.802356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818249 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818502 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" containerID="cri-o://712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818925 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" containerID="cri-o://812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.834062 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.843512 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.850787 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.854603 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" containerID="cri-o://83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" gracePeriod=29 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.862617 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.862897 4766 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-d6c45fdd9-srlkx" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" containerID="cri-o://929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.863498 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-d6c45fdd9-srlkx" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" containerID="cri-o://e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.874070 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.874096 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.884003 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.892239 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" containerID="cri-o://087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" gracePeriod=29 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.893509 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.900573 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.908345 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.920379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.931845 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.949776 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.954820 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" containerID="cri-o://068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.955849 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" containerID="cri-o://75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.960120 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:16 crc 
kubenswrapper[4766]: I0130 16:47:16.970122 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.979713 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.981919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.986854 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" containerID="cri-o://c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.004867 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "barbican" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="barbican" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to
Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.007584 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-66a8-account-create-update-hh2cg" podUID="d12bc030-c731-4999-ac6d-1be59807c6de"
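The mariadb-account-create-update script dumped above is cut off: everything between "$MYSQL_CMD <" and the trailing logger="UnhandledError" (the heredoc redirection and its SQL body) was lost when the multi-line error was flattened into the log. As a rough sketch only, the missing heredoc plausibly looks like the following, going by the script's own three comments; the exact statements and the $DatabaseUser variable are assumptions, not recovered from this log:

    # Hypothetical reconstruction of the truncated heredoc. Per the comments:
    # CREATE USER first (MySQL 8 no longer creates users implicitly via GRANT),
    # then set password/TLS with ALTER so re-running the job updates in place.
    $MYSQL_CMD <<EOF
    CREATE USER IF NOT EXISTS '$DatabaseUser'@'%';
    ALTER USER '$DatabaseUser'@'%' IDENTIFIED BY '$DatabasePassword';
    GRANT ALL PRIVILEGES ON $GRANT_DATABASE.* TO '$DatabaseUser'@'%';
    EOF

With GRANT_DATABASE="*" the grant target expands to *.*, which is presumably why the script falls back to "*" rather than an empty string. The job never reached this script here anyway: the CreateContainerConfigError above means the container environment could not be populated at all, because barbican-db-secret (the secret carrying $DatabasePassword) had already been deleted.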
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.065923 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" containerID="cri-o://db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" gracePeriod=604800 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092673 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092721 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092734 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.097967 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.161034 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197552 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197621 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:18.197600142 +0000 UTC m=+1492.835557488 (durationBeforeRetry 1s). 
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092673 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092721 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092734 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.097967 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"]
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.161034 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"]
Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197552 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197621 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:18.197600142 +0000 UTC m=+1492.835557488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found
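This MountVolume.SetUp failure is the other recurring teardown signature in this capture: the ConfigMap backing a projected volume is deleted before the pod that mounts it, so volume setup fails and kubelet schedules a retry with exponential backoff — durationBeforeRetry is 1s here, and the same class of failure for the RabbitMQ config-data volumes further down backs off to 2s and then 4s. Assuming access to the cluster while this is going on, the missing object is easy to confirm:

    # Check for the ConfigMap named in the error (names from the log entry above);
    # while the namespace is being torn down this is expected to fail the same way:
    kubectl -n openstack get configmap openstack-cell1-scripts
    # Error from server (NotFound): configmaps "openstack-cell1-scripts" not found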
Need to start a new one" pod="openstack/openstackclient" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.317734 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.317874 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.324831 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.327442 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.327512 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.330255 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.350238 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.381036 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.402375 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.416696 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.420919 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "nova_cell0" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="nova_cell0" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.422996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423052 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423341 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423374 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423417 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423489 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423527 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423639 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423674 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.424374 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run" (OuterVolumeSpecName: "var-run") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.425264 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.426018 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts" (OuterVolumeSpecName: "scripts") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.427961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.428321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config" (OuterVolumeSpecName: "config") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.428827 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.429580 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q" (OuterVolumeSpecName: "kube-api-access-g4q2q") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "kube-api-access-g4q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.431350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts" (OuterVolumeSpecName: "scripts") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.433418 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-1273-account-create-update-qhttp" podUID="4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.433562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.460682 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn" (OuterVolumeSpecName: "kube-api-access-dtgwn") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "kube-api-access-dtgwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.462723 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8" (OuterVolumeSpecName: "kube-api-access-8d4d8") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "kube-api-access-8d4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.466943 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" containerID="cri-o://40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" gracePeriod=604800 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.469430 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.471610 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "nova_api" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="nova_api" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.472766 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-b00e-account-create-update-pkszz" podUID="965e8a8f-b4eb-4abb-8177-841fde4d33a2" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.507510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n" (OuterVolumeSpecName: "kube-api-access-hvm4n") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). 
InnerVolumeSpecName "kube-api-access-hvm4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.507589 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526749 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526771 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526797 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526807 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526816 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526828 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526837 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526854 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526865 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526876 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526884 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.535233 4766 
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.535233 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" containerID="cri-o://83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" gracePeriod=30
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.537166 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.554972 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"]
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.559922 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kx5n"
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.572099 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"]
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.574729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.610773 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.629528 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "placement" ]; then
Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="placement"
Jan 30 16:47:17 crc kubenswrapper[4766]: else
Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*"
Jan 30 16:47:17 crc kubenswrapper[4766]: fi
Jan 30 16:47:17 crc kubenswrapper[4766]:
Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here:
Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629881 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629902 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629912 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.630626 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-cc14-account-create-update-6kfvc" podUID="a5ce540c-4925-43fa-b0aa-ef474912f60e" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.630834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config" (OuterVolumeSpecName: "config") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.636978 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.636793 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.642263 4766 generic.go:334] "Generic (PLEG): container finished" podID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.642340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.650849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.659849 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerID="7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.659921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.663677 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.670389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.671095 4766 scope.go:117] "RemoveContainer" containerID="df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.671260 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.680072 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": dial tcp 10.217.0.195:6080: connect: connection refused" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.681364 4766 generic.go:334] "Generic (PLEG): container finished" podID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.681420 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.686973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690456 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"44d944c146c567ab0a586afa23a8e30b46436b5558ae7e1ed7aeb15de65469a1"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690659 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.696071 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.709555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-pkszz" event={"ID":"965e8a8f-b4eb-4abb-8177-841fde4d33a2","Type":"ContainerStarted","Data":"07444bdec33060f75bafa2f5ef1ef7ed7a4bfb753db474b6ac639a173646884f"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.714503 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.163:9696/\": read tcp 10.217.0.2:45420->10.217.0.163:9696: read: connection reset by peer" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.719480 4766 generic.go:334] "Generic (PLEG): container finished" podID="14ae2453-74fa-4114-9261-21b381518493" containerID="7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.719546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.732443 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.732518 4766 scope.go:117] "RemoveContainer" containerID="0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.745711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.746606 4766 generic.go:334] "Generic (PLEG): container finished" podID="d13e6f63-37d4-4780-9902-430a9669901c" containerID="929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.746703 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.751237 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.774958 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.775881 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776270 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776523 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776890 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.783784 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.752729 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.785146 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:19.785118517 +0000 UTC m=+1494.423075863 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.784414 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.758996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-hh2cg" event={"ID":"d12bc030-c731-4999-ac6d-1be59807c6de","Type":"ContainerStarted","Data":"51966cd3a843232e24ea290a07e04942bd3fc29e3ba863dc709b3486073ad006"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.791991 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-qhttp" event={"ID":"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc","Type":"ContainerStarted","Data":"4eed2095c1e71bf557db6c6c4861ce127a35758cb81e96d8821eff98abbdbbf2"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.825749 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.825884 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.826703 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"e1760b87e9caefe6e9c0ac6d3d9d8457bd91e81888eeb4755458d5a683cbea69"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.849971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.878814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.878911 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.886330 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887496 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887789 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887800 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.908038 4766 generic.go:334] "Generic (PLEG): container finished" podID="22d60b44-40c9-425e-8daf-8931a25954e0" containerID="712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.908116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.936546 4766 generic.go:334] "Generic (PLEG): container finished" podID="063ebe65-0175-443e-8c75-5018c42b3f36" containerID="a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.936634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.940989 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.969494 4766 generic.go:334] "Generic (PLEG): container finished" podID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.969583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.990076 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.990211 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.990272 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.990254279 +0000 UTC m=+1496.628211625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.992633 4766 scope.go:117] "RemoveContainer" containerID="35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029852 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029884 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029891 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029905 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029914 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029920 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029927 4766 generic.go:334] 
"Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029933 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029939 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029946 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029954 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029961 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029967 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029974 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030085 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.032989 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-jfd74" secret="" err="secret \"galera-openstack-cell1-dockercfg-zd2kf\" not found" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.033375 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.040048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerDied","Data":"35bff03af4700c59de26d7f263ff6609c1c1e4962e327e55accdbc5ea2056c14"} Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.043430 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:18 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:18 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:18 crc kubenswrapper[4766]: else Jan 30 16:47:18 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:18 crc kubenswrapper[4766]: fi Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:18 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:18 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:18 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:18 crc kubenswrapper[4766]: # support updates Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.044572 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.074066 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" path="/var/lib/kubelet/pods/12ab95d5-fb83-42b1-a38b-9e3bb8916f37/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.074640 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" path="/var/lib/kubelet/pods/140fa04a-cb22-40ed-a08c-17f4ea13a5c4/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.075553 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" path="/var/lib/kubelet/pods/199b8ae3-05c7-4785-9590-1cb06cce0013/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.076060 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" path="/var/lib/kubelet/pods/1caad6ca-26a4-488c-8b03-90da40a955b0/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.076591 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" path="/var/lib/kubelet/pods/372f7d7a-9066-4b9b-884a-5257785ed101/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.077558 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" path="/var/lib/kubelet/pods/4c8af029-8432-4152-8e74-5c40d72636d7/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.078099 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" path="/var/lib/kubelet/pods/574fc4f9-56c3-44bf-bb85-26bb97a23ddc/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.078709 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" path="/var/lib/kubelet/pods/6da00370-0819-4857-8fa3-1ffe3e6b628b/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.079723 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" path="/var/lib/kubelet/pods/81d680b3-ced9-4a2a-9a50-780e6239b4a5/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.080270 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb52775-c639-4afc-9f21-f33531a854b3" path="/var/lib/kubelet/pods/acb52775-c639-4afc-9f21-f33531a854b3/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.080767 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" path="/var/lib/kubelet/pods/aeb40512-6ec4-4dd4-a623-ed2232387ee3/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.081800 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" path="/var/lib/kubelet/pods/b88e4495-e013-4fc2-b65b-c3d914b89dd8/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.082331 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea24037-4775-49f8-8a3b-d194ea750544" path="/var/lib/kubelet/pods/cea24037-4775-49f8-8a3b-d194ea750544/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.082821 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" path="/var/lib/kubelet/pods/d707ae8a-f650-48e3-87e8-dc79076433e4/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.153333 4766 scope.go:117] "RemoveContainer" containerID="68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.188304 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.213189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.216911 4766 scope.go:117] "RemoveContainer" containerID="20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.220090 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.255209 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.257785 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.267247 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.287791 4766 scope.go:117] "RemoveContainer" containerID="961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.296209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.296361 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.296912 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.296976 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.296959008 +0000 UTC m=+1494.934916354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.299642 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "965e8a8f-b4eb-4abb-8177-841fde4d33a2" (UID: "965e8a8f-b4eb-4abb-8177-841fde4d33a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.299696 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.313219 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.317014 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.320346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828" (OuterVolumeSpecName: "kube-api-access-bw828") pod "965e8a8f-b4eb-4abb-8177-841fde4d33a2" (UID: "965e8a8f-b4eb-4abb-8177-841fde4d33a2"). InnerVolumeSpecName "kube-api-access-bw828". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.337616 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.345021 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.345345 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.349259 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.358482 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.361544 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.363416 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.383432 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384439 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384563 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.384583 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384597 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.384604 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384592 4766 prober.go:104] "Probe errored" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384612 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385337 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385403 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385417 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="init" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385468 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="init" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385508 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385523 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385530 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386152 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386171 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386201 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386211 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386225 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386514 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386966 4766 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386986 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387002 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387014 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387025 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387040 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387054 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387073 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387087 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387800 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.399854 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.402940 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403110 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403190 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403263 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403289 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403809 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403821 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.405675 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.417135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.419892 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.432007 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.432465 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts" (OuterVolumeSpecName: "kube-api-access-7dsts") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "kube-api-access-7dsts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.465937 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.471492 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.502530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.505660 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"d12bc030-c731-4999-ac6d-1be59807c6de\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.505728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"d12bc030-c731-4999-ac6d-1be59807c6de\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506226 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506343 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506355 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506366 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506378 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506388 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.508105 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d12bc030-c731-4999-ac6d-1be59807c6de" (UID: "d12bc030-c731-4999-ac6d-1be59807c6de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.509861 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data" (OuterVolumeSpecName: "config-data") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.509933 4766 scope.go:117] "RemoveContainer" containerID="171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.516522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk" (OuterVolumeSpecName: "kube-api-access-rhtqk") pod "d12bc030-c731-4999-ac6d-1be59807c6de" (UID: "d12bc030-c731-4999-ac6d-1be59807c6de"). InnerVolumeSpecName "kube-api-access-rhtqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.524782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.549320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.562698 4766 scope.go:117] "RemoveContainer" containerID="cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.581439 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.608850 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609163 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609818 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609831 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609845 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609857 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609870 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.610435 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" (UID: "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.610577 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.617745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn" (OuterVolumeSpecName: "kube-api-access-dfxsn") pod "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" (UID: "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc"). InnerVolumeSpecName "kube-api-access-dfxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.645721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.714342 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.714380 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfxsn\" (UniqueName: 
\"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.739942 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.746004 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.746167 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.750818 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.750981 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.751011 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.750823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data" (OuterVolumeSpecName: "config-data") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.760385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd" (OuterVolumeSpecName: "kube-api-access-plzmd") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "kube-api-access-plzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.762505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.768463 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.768520 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.819235 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.819540 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.820412 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.020967 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.022520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.060911 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.087364 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.102514 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.102763 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.108370 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.121403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-hh2cg" event={"ID":"d12bc030-c731-4999-ac6d-1be59807c6de","Type":"ContainerDied","Data":"51966cd3a843232e24ea290a07e04942bd3fc29e3ba863dc709b3486073ad006"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.121539 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.125267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.130254 4766 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.130281 4766 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.133308 4766 generic.go:334] "Generic (PLEG): container finished" podID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerID="7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.133411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162133 4766 generic.go:334] "Generic (PLEG): container finished" podID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162271 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162307 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162324 4766 scope.go:117] "RemoveContainer" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162462 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174228 4766 generic.go:334] "Generic (PLEG): container finished" podID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerDied","Data":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerDied","Data":"e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.177301 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.208821 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.213655 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.215393 4766 scope.go:117] "RemoveContainer" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.223905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-6kfvc" event={"ID":"a5ce540c-4925-43fa-b0aa-ef474912f60e","Type":"ContainerStarted","Data":"4fd65f8ecd2b6f82a377e2d07f913ddeac5bcdf9496f8b1aeada1b9cd5e4251c"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.234134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.235508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.236359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.254516 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm" (OuterVolumeSpecName: "kube-api-access-wpsfm") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "kube-api-access-wpsfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.257363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-qhttp" event={"ID":"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc","Type":"ContainerDied","Data":"4eed2095c1e71bf557db6c6c4861ce127a35758cb81e96d8821eff98abbdbbf2"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.257459 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.262416 4766 scope.go:117] "RemoveContainer" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.266823 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.267210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-pkszz" event={"ID":"965e8a8f-b4eb-4abb-8177-841fde4d33a2","Type":"ContainerDied","Data":"07444bdec33060f75bafa2f5ef1ef7ed7a4bfb753db474b6ac639a173646884f"} Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.268335 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": container with ID starting with 75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a not found: ID does not exist" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.269048 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} err="failed to get container status \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": rpc error: code = NotFound desc = could not find container \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": container with ID starting with 75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.269653 4766 scope.go:117] "RemoveContainer" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.268446 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.270456 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": container with ID starting with 068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350 not found: ID does not exist" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.270498 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} err="failed to get container status \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": rpc error: code = NotFound desc = could not find container \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": container with ID starting with 068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350 not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.270529 4766 scope.go:117] "RemoveContainer" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271666 4766 generic.go:334] "Generic (PLEG): container finished" podID="e5346df4-67e7-4a20-bb56-11173908a334" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerDied","Data":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerDied","Data":"33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.272059 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.273674 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.277678 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.278914 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kx5n" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" containerID="cri-o://8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" gracePeriod=2 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.317385 4766 scope.go:117] "RemoveContainer" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.317964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data" (OuterVolumeSpecName: "config-data") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.333807 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": container with ID starting with 2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1 not found: ID does not exist" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.333866 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"} err="failed to get container status \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": rpc error: code = NotFound desc = could not find container \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": container with ID starting with 2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1 not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.333899 4766 scope.go:117] "RemoveContainer" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344511 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344553 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344566 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.399955 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.417040 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.419015 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.420782 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.421152 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.424874 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.429554 4766 scope.go:117] "RemoveContainer" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.430961 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": container with ID starting with f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf not found: ID does not exist" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.431003 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"} err="failed to get container status \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": rpc error: code = NotFound desc = could not find container \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": container with ID starting with f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.460171 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.476930 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.498798 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.505529 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.619929 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:19 
crc kubenswrapper[4766]: I0130 16:47:19.628753 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.739490 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.739782 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" containerID="cri-o://1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740507 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" containerID="cri-o://3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740816 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" containerID="cri-o://858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740864 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" containerID="cri-o://69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.773376 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.802684 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.802915 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" containerID="cri-o://b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.873932 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.873999 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.873984701 +0000 UTC m=+1498.511942047 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.938085 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.938359 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" containerID="cri-o://7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.968869 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.000261 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.016686 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016702 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.016732 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016738 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016885 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016902 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.017588 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.020907 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.038589 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.080743 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.080865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.103381 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" path="/var/lib/kubelet/pods/1e751b80-d475-4bfd-a382-5d9e1618e5aa/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.104568 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" path="/var/lib/kubelet/pods/2852c370-2b06-4a98-9d48-190ed09dc7fb/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.105550 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" path="/var/lib/kubelet/pods/3fb40e54-43ed-4dd6-8c23-138c01cf062d/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.116971 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" path="/var/lib/kubelet/pods/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.124359 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965e8a8f-b4eb-4abb-8177-841fde4d33a2" path="/var/lib/kubelet/pods/965e8a8f-b4eb-4abb-8177-841fde4d33a2/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.125544 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" path="/var/lib/kubelet/pods/c3997cdc-9abd-4aa3-9201-0015456d4750/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.126612 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" path="/var/lib/kubelet/pods/c4c6022b-f99b-41de-8048-ac8e4c4fa68f/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.128079 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12bc030-c731-4999-ac6d-1be59807c6de" path="/var/lib/kubelet/pods/d12bc030-c731-4999-ac6d-1be59807c6de/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.128520 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" 
path="/var/lib/kubelet/pods/dc575168-b373-41ba-9dd6-2d9d168a6527/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.129264 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5346df4-67e7-4a20-bb56-11173908a334" path="/var/lib/kubelet/pods/e5346df4-67e7-4a20-bb56-11173908a334/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.129844 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" path="/var/lib/kubelet/pods/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149552 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149659 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149731 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149986 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7bc6f65df6-mx4xk" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" containerID="cri-o://7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" gracePeriod=30 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.165449 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.166272 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.186413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.192873 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.192966 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.692941522 +0000 UTC m=+1495.330898868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.193823 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.209910 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:57124->10.217.0.203:8775: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.210040 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:57134->10.217.0.203:8775: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.227218 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.227321 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.727290502 +0000 UTC m=+1495.365247848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.299442 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.299520 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:24.29950201 +0000 UTC m=+1498.937459356 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.300650 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.325069 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.332537 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jfd74"
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338635 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc"
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338859 4766 generic.go:334] "Generic (PLEG): container finished" podID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerID="e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" exitCode=0
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338891 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae"}
Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.348261 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:47:20 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: if [ -n "" ]; then
Jan 30 16:47:20 crc kubenswrapper[4766]: GRANT_DATABASE=""
Jan 30 16:47:20 crc kubenswrapper[4766]: else
Jan 30 16:47:20 crc kubenswrapper[4766]: GRANT_DATABASE="*"
Jan 30 16:47:20 crc kubenswrapper[4766]: fi
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: # going for maximum compatibility here:
Jan 30 16:47:20 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 16:47:20 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 16:47:20 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 16:47:20 crc kubenswrapper[4766]: # support updates
Jan 30 16:47:20 crc kubenswrapper[4766]: 
Jan 30 16:47:20 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.354623 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zlndr" podUID="768238f5-b74e-4f23-91ec-4eeb69375025"
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.360913 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:49948->10.217.0.156:9311: read: connection reset by peer"
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.360929 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:49952->10.217.0.156:9311: read: connection reset by peer"
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364286 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qdgxb"]
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364810 4766 generic.go:334] "Generic (PLEG): container finished" podID="bb576787-90a5-4e81-a047-6fcf37921335" containerID="b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" exitCode=2
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerDied","Data":"b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26"}
Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.372007 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375079 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375140 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375164 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"721b24966425ad3828c4ed010c44283d43a0eeb0f5dae60a2287376c39e4728d"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375240 4766 scope.go:117] "RemoveContainer" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375305 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375434 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.375968 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jn9rq operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e3be-account-create-update-qnsph" podUID="34adc844-a813-4bb0-9d46-131d1b5a7b9b" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.398816 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401578 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401639 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401717 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"a5ce540c-4925-43fa-b0aa-ef474912f60e\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401746 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"4e9bbf1f-b039-4112-ab71-308535065091\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401789 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401839 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"4e9bbf1f-b039-4112-ab71-308535065091\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401867 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"a5ce540c-4925-43fa-b0aa-ef474912f60e\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401958 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.402000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.414590 4766 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs" (OuterVolumeSpecName: "logs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.415428 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities" (OuterVolumeSpecName: "utilities") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.417508 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.417929 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e9bbf1f-b039-4112-ab71-308535065091" (UID: "4e9bbf1f-b039-4112-ab71-308535065091"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.421108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5ce540c-4925-43fa-b0aa-ef474912f60e" (UID: "a5ce540c-4925-43fa-b0aa-ef474912f60e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.423705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8" (OuterVolumeSpecName: "kube-api-access-k9sg8") pod "a5ce540c-4925-43fa-b0aa-ef474912f60e" (UID: "a5ce540c-4925-43fa-b0aa-ef474912f60e"). InnerVolumeSpecName "kube-api-access-k9sg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426486 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426520 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4" exitCode=2 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426675 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.432700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2" (OuterVolumeSpecName: "kube-api-access-s4dw2") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "kube-api-access-s4dw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.437776 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf" (OuterVolumeSpecName: "kube-api-access-nkrsf") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "kube-api-access-nkrsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443371 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts" (OuterVolumeSpecName: "scripts") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443538 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jfd74" event={"ID":"4e9bbf1f-b039-4112-ab71-308535065091","Type":"ContainerDied","Data":"fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443648 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.446941 4766 scope.go:117] "RemoveContainer" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.447018 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.465157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z" (OuterVolumeSpecName: "kube-api-access-nn85z") pod "4e9bbf1f-b039-4112-ab71-308535065091" (UID: "4e9bbf1f-b039-4112-ab71-308535065091"). InnerVolumeSpecName "kube-api-access-nn85z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.466499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-6kfvc" event={"ID":"a5ce540c-4925-43fa-b0aa-ef474912f60e","Type":"ContainerDied","Data":"4fd65f8ecd2b6f82a377e2d07f913ddeac5bcdf9496f8b1aeada1b9cd5e4251c"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.467151 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506589 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506625 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506636 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506645 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506656 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506664 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506672 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506680 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506689 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.573032 4766 scope.go:117] "RemoveContainer" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.579524 4766 generic.go:334] "Generic (PLEG): container finished" podID="14ae2453-74fa-4114-9261-21b381518493" containerID="078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.579624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.625831 4766 generic.go:334] "Generic (PLEG): container finished" podID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerID="83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.625928 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.634459 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerID="7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.634516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.642291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.646774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.658520 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.712416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.712642 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.712777 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.712867 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.712816927 +0000 UTC m=+1496.350774283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.715988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.735277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.774110 4766 scope.go:117] "RemoveContainer" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.774328 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data" (OuterVolumeSpecName: "config-data") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.775578 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": container with ID starting with 8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7 not found: ID does not exist" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.775624 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"} err="failed to get container status \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": rpc error: code = NotFound desc = could not find container \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": container with ID starting with 8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.775652 4766 scope.go:117] "RemoveContainer" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.777243 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": container with ID starting with 07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921 not found: ID does not exist" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.777275 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"} err="failed to get container status \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": rpc error: code = NotFound desc = could not find container \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": container with ID starting with 07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.777296 4766 scope.go:117] "RemoveContainer" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.789642 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" containerID="cri-o://aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" gracePeriod=30 Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.789721 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": container with ID starting with 327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14 not found: ID does not exist" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.789757 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14"} err="failed to get container status \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": rpc error: code = NotFound desc = could not find container \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": container with ID starting with 327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814438 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814454 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814466 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.824358 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.824440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.824416306 +0000 UTC m=+1496.462373652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.905404 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": dial tcp 10.217.0.168:8776: connect: connection refused" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.958584 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.997588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.016240 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.076863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125450 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125562 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125591 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126035 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126534 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.131327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.132432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.132887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.133247 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.144419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz" (OuterVolumeSpecName: "kube-api-access-q47vz") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "kube-api-access-q47vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.148239 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.166572 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e9bbf1f_b039_4112_ab71_308535065091.slice/crio-fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-conmon-812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d5b8a42_39dd_4b1b_9f92_1e3585b6707b.slice/crio-conmon-a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.204873 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.227969 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.227993 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228004 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228012 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228021 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228030 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228038 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.254249 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.265819 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.330859 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.330926 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.415606 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.425834 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.441490 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.487455 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.487556 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.489827 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.491387 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.491433 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541958 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542103 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542152 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542198 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542236 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542499 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542590 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542639 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.546006 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.549517 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs" (OuterVolumeSpecName: "logs") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.549694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb" (OuterVolumeSpecName: "kube-api-access-r4btb") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "kube-api-access-r4btb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.561422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.569942 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.575254 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts" (OuterVolumeSpecName: "scripts") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.585918 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48" (OuterVolumeSpecName: "kube-api-access-s2f48") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-api-access-s2f48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.609902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.623716 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.627146 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.632728 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.632805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data" (OuterVolumeSpecName: "config-data") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.638808 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643675 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643817 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643844 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.644008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.644054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646061 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646102 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646115 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646127 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646152 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646164 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646191 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646204 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646218 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646233 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646243 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.648792 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs" (OuterVolumeSpecName: "logs") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.654408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.654578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5" (OuterVolumeSpecName: "kube-api-access-sxzz5") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "kube-api-access-sxzz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.661208 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662587 4766 generic.go:334] "Generic (PLEG): container finished" podID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"7e89f84a27af28de0ff96a206ea024d02e0721f6cc45b38d9fef889091b6e08b"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662715 4766 scope.go:117] "RemoveContainer" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662858 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.678834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs" (OuterVolumeSpecName: "logs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.696291 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv" (OuterVolumeSpecName: "kube-api-access-xqjcv") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "kube-api-access-xqjcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.702628 4766 generic.go:334] "Generic (PLEG): container finished" podID="063ebe65-0175-443e-8c75-5018c42b3f36" containerID="e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.702708 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.708500 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.711207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerDied","Data":"004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.711323 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.714803 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.718501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data" (OuterVolumeSpecName: "config-data") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.720518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.720640 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.722418 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.728366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.729794 4766 generic.go:334] "Generic (PLEG): container finished" podID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerID="7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.729890 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerDied","Data":"7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.738596 4766 generic.go:334] "Generic (PLEG): container finished" podID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.741513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.742320 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.742479 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.746796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748536 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749064 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749205 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749316 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750012 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750124 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750736 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750839 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751321 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751886 4766 scope.go:117] "RemoveContainer" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.752237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 
16:47:21.748987 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs" (OuterVolumeSpecName: "logs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749092 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs" (OuterVolumeSpecName: "logs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749136 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750391 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs" (OuterVolumeSpecName: "logs") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.753574 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.753751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.757088 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.757153 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.757131547 +0000 UTC m=+1498.395088963 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766144 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766451 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766471 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766488 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766527 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766541 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766554 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766564 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766602 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766770 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766788 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766871 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766885 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766899 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766943 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766959 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.771695 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b" (OuterVolumeSpecName: "kube-api-access-2xl7b") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "kube-api-access-2xl7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772160 4766 generic.go:334] "Generic (PLEG): container finished" podID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772302 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"fbc4233875c212f4b897d1f9917772ed396cd3598ca0ca808134dccd327aa2de"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772369 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.775486 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.777150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781010 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781202 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" exitCode=139 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.800765 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801532 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801831 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.802068 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804201 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zlndr" event={"ID":"768238f5-b74e-4f23-91ec-4eeb69375025","Type":"ContainerStarted","Data":"51ffbc2026ffaf4c9f26fd55d50669f8d3b947029fdc717ba29a5acfdc7e97bf"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804630 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts" (OuterVolumeSpecName: "scripts") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804862 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-zlndr" secret="" err="secret \"galera-openstack-dockercfg-x2qq7\" not found" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.806664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t" (OuterVolumeSpecName: "kube-api-access-69h5t") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "kube-api-access-69h5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.806968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b" (OuterVolumeSpecName: "kube-api-access-dct4b") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "kube-api-access-dct4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.807117 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.814970 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.815228 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.817845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.817851 4766 generic.go:334] "Generic (PLEG): container finished" podID="d13e6f63-37d4-4780-9902-430a9669901c" containerID="e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.820382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts" (OuterVolumeSpecName: "scripts") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.821834 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:21 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:21 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:21 crc kubenswrapper[4766]: else Jan 30 16:47:21 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:21 crc kubenswrapper[4766]: fi Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:21 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:21 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:21 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:21 crc kubenswrapper[4766]: # support updates Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.822971 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zlndr" podUID="768238f5-b74e-4f23-91ec-4eeb69375025" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.823626 4766 generic.go:334] "Generic (PLEG): container finished" podID="22d60b44-40c9-425e-8daf-8931a25954e0" containerID="812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.823672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.824307 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.826393 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"e94bea3a22075449c7ce733d15ed50c31bf49ec686272c0a7961479d9194b9c6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.826626 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.830074 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.835350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.836066 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841613 4766 generic.go:334] "Generic (PLEG): container finished" podID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.850057 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.850269 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.858424 4766 scope.go:117] "RemoveContainer" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.860892 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": container with ID starting with f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668 not found: ID does not exist" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.861227 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} err="failed to get container status \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": rpc error: code = NotFound desc = could not find container \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": container with ID starting with f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668 not found: ID does not exist" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.861309 4766 scope.go:117] "RemoveContainer" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.862215 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": container with ID starting with a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832 not found: ID does not exist" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.863128 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} err="failed to get container status \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": rpc error: code = NotFound desc = could not find container \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": container with ID starting with a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832 not found: ID does not exist" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.863204 4766 scope.go:117] "RemoveContainer" containerID="b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869303 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869320 4766 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869360 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869370 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869381 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869390 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869398 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869406 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.869416 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.869527 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:22.369489257 +0000 UTC m=+1497.007446603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869448 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.879528 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.879675 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.879652151 +0000 UTC m=+1498.517609497 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.881397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data" (OuterVolumeSpecName: "config-data") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.908357 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.917686 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.955559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.973645 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974441 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974531 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974645 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.057943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.063623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" path="/var/lib/kubelet/pods/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.064220 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data" (OuterVolumeSpecName: "config-data") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.065857 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9bbf1f-b039-4112-ab71-308535065091" path="/var/lib/kubelet/pods/4e9bbf1f-b039-4112-ab71-308535065091/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.066585 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" path="/var/lib/kubelet/pods/59eff57d-cb92-4c52-aad2-6e43b3908fd4/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.068346 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845c3343-246e-4309-bd46-9bcd92cad574" path="/var/lib/kubelet/pods/845c3343-246e-4309-bd46-9bcd92cad574/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.069662 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ce540c-4925-43fa-b0aa-ef474912f60e" path="/var/lib/kubelet/pods/a5ce540c-4925-43fa-b0aa-ef474912f60e/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.072724 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b242f466-9049-49a9-b655-b270790de9ce" path="/var/lib/kubelet/pods/b242f466-9049-49a9-b655-b270790de9ce/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.075241 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb576787-90a5-4e81-a047-6fcf37921335" path="/var/lib/kubelet/pods/bb576787-90a5-4e81-a047-6fcf37921335/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.090556 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.090746 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.090955 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.091062 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:30.091037097 +0000 UTC m=+1504.728994493 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.134520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.154385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.165830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data" (OuterVolumeSpecName: "config-data") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.169562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.169796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.171584 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data" (OuterVolumeSpecName: "config-data") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.174453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.191966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.192881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:22 crc kubenswrapper[4766]: W0130 16:47:22.193148 4766 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aca8dfc0-f915-4696-95c1-3c232f2ea35a/volumes/kubernetes.io~secret/public-tls-certs Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193630 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193717 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193798 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193909 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193987 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194051 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194129 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194264 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.282424 4766 scope.go:117] "RemoveContainer" containerID="7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.317198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.321372 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.344137 4766 scope.go:117] "RemoveContainer" containerID="7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.346912 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.359955 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.372848 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.374813 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.377804 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.383488 4766 scope.go:117] "RemoveContainer" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.390877 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399140 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399387 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399660 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: 
\"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399716 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399815 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.400456 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.400519 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.400498684 +0000 UTC m=+1498.038456030 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.401739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.403284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.407856 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.408503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data" (OuterVolumeSpecName: "config-data") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.409611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs" (OuterVolumeSpecName: "logs") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.428007 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts" (OuterVolumeSpecName: "scripts") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz" (OuterVolumeSpecName: "kube-api-access-h9fwz") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "kube-api-access-h9fwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx" (OuterVolumeSpecName: "kube-api-access-mbnzx") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "kube-api-access-mbnzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc" (OuterVolumeSpecName: "kube-api-access-cflcc") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "kube-api-access-cflcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.454800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.458895 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.459631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.469065 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.485507 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data" (OuterVolumeSpecName: "config-data") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.493339 4766 scope.go:117] "RemoveContainer" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501897 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501917 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501944 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502014 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502114 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502183 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503852 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503896 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503914 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503929 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503947 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503960 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503972 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503992 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504009 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504024 4766 reconciler_common.go:293] "Volume detached for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504038 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504056 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504069 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504082 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504094 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.517124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs" (OuterVolumeSpecName: "logs") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.517751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts" (OuterVolumeSpecName: "scripts") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518028 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518657 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518811 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.519737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r" (OuterVolumeSpecName: "kube-api-access-26q8r") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "kube-api-access-26q8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.520919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.522233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx" (OuterVolumeSpecName: "kube-api-access-rbwrx") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "kube-api-access-rbwrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.535723 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.536017 4766 scope.go:117] "RemoveContainer" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.550024 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.551570 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": container with ID starting with a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425 not found: ID does not exist" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.551621 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} err="failed to get container status \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": rpc error: code = NotFound desc = could not find container \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": container with ID starting with a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425 not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.551653 4766 scope.go:117] "RemoveContainer" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.562141 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": container with ID starting with ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc not found: ID does not exist" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.562220 4766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} err="failed to get container status \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": rpc error: code = NotFound desc = could not find container \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": container with ID starting with ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.562252 4766 scope.go:117] "RemoveContainer" containerID="078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.570882 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.572505 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.578407 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.580542 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.594642 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.597956 4766 scope.go:117] "RemoveContainer" containerID="7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: 
\"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606875 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.607025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.613298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts" (OuterVolumeSpecName: "scripts") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616708 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616742 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616758 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616771 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616782 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616796 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616808 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc 
kubenswrapper[4766]: I0130 16:47:22.616820 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616833 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.620547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.622134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config" (OuterVolumeSpecName: "config") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.651372 4766 scope.go:117] "RemoveContainer" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.657299 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.659133 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q" (OuterVolumeSpecName: "kube-api-access-p2t7q") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "kube-api-access-p2t7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.672826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.690364 4766 scope.go:117] "RemoveContainer" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.701943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.716810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.717963 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.717991 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718004 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718019 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718034 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718045 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.719378 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.725366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data" (OuterVolumeSpecName: "config-data") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.726324 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.729530 4766 scope.go:117] "RemoveContainer" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.730274 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": container with ID starting with f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d not found: ID does not exist" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.730309 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"} err="failed to get container status \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": rpc error: code = NotFound desc = could not find container \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": container with ID starting with f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.730345 4766 scope.go:117] "RemoveContainer" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.732525 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": container with ID starting with e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc not found: ID does not exist" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.732557 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"} err="failed to get container status \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": rpc error: code = NotFound desc = could not find container \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": container with ID starting with e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.732578 4766 scope.go:117] "RemoveContainer" containerID="e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.757943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.765810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.768128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data" (OuterVolumeSpecName: "config-data") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.768950 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.791392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.804929 4766 scope.go:117] "RemoveContainer" containerID="13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.805115 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.813022 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819110 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819152 4766 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819167 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819284 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819299 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.870292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerDied","Data":"38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.870413 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.878301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.878669 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.890211 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.890210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.905877 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.906011 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.906481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"090eddff40a00fe6ea2b9a4d39ef4e8496a69421f9440b673916d296607e29b3"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data" (OuterVolumeSpecName: "config-data") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"9e20509f1f367971ebad4df00092bfa9e6a737cd37ee5f2217bf7f1fb1c22b6c"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918693 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.920728 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.926299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.926392 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.958692 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.069100 4766 scope.go:117] "RemoveContainer" containerID="83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.085620 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.104638 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.111768 4766 scope.go:117] "RemoveContainer" containerID="e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.142238 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.173228 4766 scope.go:117] "RemoveContainer" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.180947 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.202251 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.226362 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.227869 4766 scope.go:117] "RemoveContainer" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.241993 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.256716 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.284335 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.289703 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.298960 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.302545 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.304537 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.320963 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:23 crc 
kubenswrapper[4766]: I0130 16:47:23.337149 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.337252 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.355801 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.358703 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.358776 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.361839 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.363130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.363310 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.364608 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.364650 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.404492 4766 scope.go:117] "RemoveContainer" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.405693 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": container with ID starting with b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5 not found: ID does not exist" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.405735 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} err="failed to get container status \"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": rpc error: code = NotFound desc = could not find container \"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": container with ID starting with b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5 not found: ID does not exist" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.405761 4766 scope.go:117] "RemoveContainer" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.407532 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": container with ID starting with c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1 not found: ID does not exist" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.407569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} err="failed to get container status \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": rpc error: code = NotFound desc = could not find container \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": container with ID starting with c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1 not found: ID does not exist" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.407583 4766 scope.go:117] "RemoveContainer" containerID="7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.433445 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.436788 4766 scope.go:117] "RemoveContainer" containerID="812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.438250 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.438297 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:25.438285379 +0000 UTC m=+1500.076242725 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.470780 4766 scope.go:117] "RemoveContainer" containerID="712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.498430 4766 scope.go:117] "RemoveContainer" containerID="a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.513116 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539825 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539913 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.540015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.540040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.541357 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.542017 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543608 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543922 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.555654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv" (OuterVolumeSpecName: "kube-api-access-t4qrv") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "kube-api-access-t4qrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.570671 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.571408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.573498 4766 scope.go:117] "RemoveContainer" containerID="e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.611653 4766 scope.go:117] "RemoveContainer" containerID="722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.619907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.643615 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"768238f5-b74e-4f23-91ec-4eeb69375025\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.643857 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"768238f5-b74e-4f23-91ec-4eeb69375025\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644304 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644322 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644353 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644364 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644371 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644389 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644398 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644449 4766 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768238f5-b74e-4f23-91ec-4eeb69375025" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.647578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497" (OuterVolumeSpecName: "kube-api-access-fc497") pod "768238f5-b74e-4f23-91ec-4eeb69375025" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025"). InnerVolumeSpecName "kube-api-access-fc497". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.652478 4766 scope.go:117] "RemoveContainer" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.663436 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.674520 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bc6f65df6-mx4xk" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.150:5000/v3\": read tcp 10.217.0.2:45678->10.217.0.150:5000: read: connection reset by peer" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.697024 4766 scope.go:117] "RemoveContainer" containerID="858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.737337 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.739399 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.740650 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.740726 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" 
probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.743361 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746214 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746246 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746258 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.749744 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.755334 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.755414 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.765379 4766 scope.go:117] "RemoveContainer" containerID="3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.805033 4766 scope.go:117] "RemoveContainer" containerID="69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.905336 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.908918 4766 scope.go:117] "RemoveContainer" containerID="1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.945428 4766 scope.go:117] "RemoveContainer" containerID="e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6" Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.964645 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.964727 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:31.964707957 +0000 UTC m=+1506.602665303 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.976679 4766 scope.go:117] "RemoveContainer" containerID="929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987287 4766 generic.go:334] "Generic (PLEG): container finished" podID="b21357e1-82c9-419a-a191-359c84d6d001" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" exitCode=0 Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987387 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
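
The nestedpendingoperations entry above ("No retries permitted until ... (durationBeforeRetry 8s)") is the volume manager's exponential backoff: each failed MountVolume.SetUp for the missing rabbitmq-config-data ConfigMap doubles the wait before the next attempt. A sketch of that doubling policy, with illustrative constants (kubelet's actual initial delay and cap may differ):

```go
// Sketch of the doubling retry policy behind the
// "durationBeforeRetry 8s" line above. Constants are assumptions
// chosen for illustration, not kubelet's exact values.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration
}

// next doubles the wait after every failure, up to a cap, so a
// still-missing ConfigMap is re-checked at 1s, 2s, 4s, 8s, ...
func (b *backoff) next() time.Duration {
	const (
		initial = time.Second
		max     = 2 * time.Minute
	)
	if b.delay == 0 {
		b.delay = initial
	} else if b.delay < max {
		b.delay *= 2
		if b.delay > max {
			b.delay = max
		}
	}
	return b.delay
}

func main() {
	var b backoff
	for i := 0; i < 5; i++ {
		fmt.Println(b.next()) // 1s 2s 4s 8s 16s
	}
}
```
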
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"} Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107"} Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.990097 4766 generic.go:334] "Generic (PLEG): container finished" podID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerID="40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" exitCode=0 Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.990191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9"} Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.995194 4766 generic.go:334] "Generic (PLEG): container finished" podID="821de7d3-dc41-4351-bced-6ed09a729223" containerID="7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" exitCode=0 Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.995211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerDied","Data":"7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188"} Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.001120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zlndr" event={"ID":"768238f5-b74e-4f23-91ec-4eeb69375025","Type":"ContainerDied","Data":"51ffbc2026ffaf4c9f26fd55d50669f8d3b947029fdc717ba29a5acfdc7e97bf"} Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.001149 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.002521 4766 scope.go:117] "RemoveContainer" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016724 4766 generic.go:334] "Generic (PLEG): container finished" podID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" exitCode=0 Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016808 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"} Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f"} Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
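
The paired "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod ... ContainerDied" entries come from the pod lifecycle event generator: it periodically relists container state from the runtime, diffs the snapshot against the previous one, and emits one event per observed transition for the sync loop to handle. A rough sketch of that relist-and-diff step (all types and names here are invented for illustration):

```go
// Sketch of a PLEG-style relist: diff old vs. new container state
// and emit one lifecycle event per transition.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	podID, containerID string
	kind               string // e.g. "ContainerDied"
}

// relist compares two snapshots keyed by container ID and reports
// containers that went from running to exited, mirroring the
// "Generic (PLEG): container finished" lines above.
func relist(old, cur map[string]state, pods map[string]string) []event {
	var out []event
	for id, was := range old {
		if was == running && cur[id] == exited {
			out = append(out, event{podID: pods[id], containerID: id, kind: "ContainerDied"})
		}
	}
	return out
}

func main() {
	old := map[string]state{"db7977d": running}
	cur := map[string]state{"db7977d": exited}
	pods := map[string]string{"db7977d": "openstack/rabbitmq-cell1-server-0"}
	for _, e := range relist(old, cur, pods) {
		fmt.Printf("%s %s %s\n", e.kind, e.podID, e.containerID)
	}
}
```
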
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.040679 4766 scope.go:117] "RemoveContainer" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.071672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072288 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072327 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072351 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072449 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072574 4766 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.081982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.082444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.082538 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.083119 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx" (OuterVolumeSpecName: "kube-api-access-vjnbx") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "kube-api-access-vjnbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.087359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info" (OuterVolumeSpecName: "pod-info") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.100807 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.102424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.108369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.115467 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" path="/var/lib/kubelet/pods/063ebe65-0175-443e-8c75-5018c42b3f36/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.116395 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ae2453-74fa-4114-9261-21b381518493" path="/var/lib/kubelet/pods/14ae2453-74fa-4114-9261-21b381518493/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.116941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" path="/var/lib/kubelet/pods/17d6e828-fc05-46cb-9bee-bac08ebf331a/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118001 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" path="/var/lib/kubelet/pods/22d60b44-40c9-425e-8daf-8931a25954e0/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118497 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34adc844-a813-4bb0-9d46-131d1b5a7b9b" path="/var/lib/kubelet/pods/34adc844-a813-4bb0-9d46-131d1b5a7b9b/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118854 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" path="/var/lib/kubelet/pods/40f1dc52-213f-4a5b-af33-4067a83859e4/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.122536 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" path="/var/lib/kubelet/pods/447a8ec3-4e50-40a9-b418-01fd8c0eb03e/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.123150 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" path="/var/lib/kubelet/pods/4bc2931b-8439-4c5c-be4d-43f4aab528f2/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.123806 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" path="/var/lib/kubelet/pods/61f7793d-39bd-4e96-a857-7de972f0c76d/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.124357 4766 scope.go:117] "RemoveContainer" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.127998 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" 
path="/var/lib/kubelet/pods/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.128782 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" path="/var/lib/kubelet/pods/908c7fd8-c07e-463e-94c4-76980a3a8ba2/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.135857 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": container with ID starting with db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920 not found: ID does not exist" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.136316 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"} err="failed to get container status \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": rpc error: code = NotFound desc = could not find container \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": container with ID starting with db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920 not found: ID does not exist" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.136437 4766 scope.go:117] "RemoveContainer" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.137012 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" path="/var/lib/kubelet/pods/9ad68dc2-23ff-4044-b74d-149ae8f02bc0/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.140299 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": container with ID starting with 9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d not found: ID does not exist" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.140334 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} err="failed to get container status \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": rpc error: code = NotFound desc = could not find container \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": container with ID starting with 9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d not found: ID does not exist" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.140359 4766 scope.go:117] "RemoveContainer" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.143098 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" path="/var/lib/kubelet/pods/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/volumes" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.143859 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13e6f63-37d4-4780-9902-430a9669901c" path="/var/lib/kubelet/pods/d13e6f63-37d4-4780-9902-430a9669901c/volumes" Jan 30 
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.166900 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf" (OuterVolumeSpecName: "server-conf") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.167704 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data" (OuterVolumeSpecName: "config-data") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174276 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174449 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174520 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174592 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174684 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174743 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174794 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174897 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174953 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24
crc kubenswrapper[4766]: I0130 16:47:24.200589 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200630 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200648 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200661 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.213315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.215432 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.239368 4766 scope.go:117] "RemoveContainer" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.265652 4766 scope.go:117] "RemoveContainer" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.271592 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": container with ID starting with aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399 not found: ID does not exist" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.271637 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"} err="failed to get container status \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": rpc error: code = NotFound desc = could not find container \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": container with ID starting with aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399 not found: ID does not exist" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.271664 4766 scope.go:117] "RemoveContainer" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171" Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.272109 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": container with ID starting with 6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171 not found: ID does not exist" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.272162 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} err="failed to get container status \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": rpc error: code = NotFound desc = could not find container \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": container with ID starting with 6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171 not found: ID does not exist" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.276477 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.276527 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.346259 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.358752 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.576381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.593737 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.682916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.682985 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683018 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683123 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683160 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683259 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683285 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683331 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683414 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683441 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683483 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683510 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.684400 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.686673 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k" (OuterVolumeSpecName: "kube-api-access-kbx8k") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "kube-api-access-kbx8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.690818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info" (OuterVolumeSpecName: "pod-info") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.692268 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.693768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6" (OuterVolumeSpecName: "kube-api-access-pxtx6") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "kube-api-access-pxtx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.695995 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts" (OuterVolumeSpecName: "scripts") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.699258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.710893 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data" (OuterVolumeSpecName: "config-data") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.717345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data" (OuterVolumeSpecName: "config-data") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.740469 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.759135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.763350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.784891 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789202 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789249 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789265 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789280 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789293 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789305 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789317 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789330 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789341 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789352 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789365 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789376 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789388 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") on node 
\"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789399 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789410 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789420 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789457 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789471 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.824471 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.846520 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.891286 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.891323 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034872 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034894 4766 scope.go:117] "RemoveContainer" containerID="40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.039051 4766 generic.go:334] "Generic (PLEG): container finished" podID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" exitCode=0 Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.039109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerDied","Data":"7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.041050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerDied","Data":"f7e59fee20a8c8c4ebf0975c2f9adc338f4c7ce8ad17f7e1383af919425199ff"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.041204 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.066723 4766 scope.go:117] "RemoveContainer" containerID="420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.099585 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.120085 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.129232 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.131926 4766 scope.go:117] "RemoveContainer" containerID="7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.146527 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.535083 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.691693 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705603 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.713121 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz" (OuterVolumeSpecName: "kube-api-access-jmrmz") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "kube-api-access-jmrmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.736320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.743718 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data" (OuterVolumeSpecName: "config-data") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807436 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807607 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808110 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808139 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808152 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.811171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p" (OuterVolumeSpecName: "kube-api-access-5r45p") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "kube-api-access-5r45p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.825579 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.830512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data" (OuterVolumeSpecName: "config-data") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910075 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910132 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910144 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.047678 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" path="/var/lib/kubelet/pods/62dd6ad1-1550-48cf-b103-b7ab6dd93c97/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.048422 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768238f5-b74e-4f23-91ec-4eeb69375025" path="/var/lib/kubelet/pods/768238f5-b74e-4f23-91ec-4eeb69375025/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.048975 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821de7d3-dc41-4351-bced-6ed09a729223" path="/var/lib/kubelet/pods/821de7d3-dc41-4351-bced-6ed09a729223/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.050254 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21357e1-82c9-419a-a191-359c84d6d001" path="/var/lib/kubelet/pods/b21357e1-82c9-419a-a191-359c84d6d001/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.052077 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" path="/var/lib/kubelet/pods/bc2a138c-9abd-427b-815c-cbb9e12459f6/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerDied","Data":"dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065305 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065458 4766 scope.go:117] "RemoveContainer" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069765 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f217490-8a26-4f4b-935b-fe5918500948" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" exitCode=0 Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069801 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerDied","Data":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerDied","Data":"f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.095302 4766 scope.go:117] "RemoveContainer" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.101293 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.116886 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122152 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122240 4766 scope.go:117] "RemoveContainer" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: E0130 16:47:26.122734 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": container with ID starting with 49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884 not found: ID does not exist" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122773 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"} err="failed to get container status \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": rpc error: code = NotFound desc = could not find container \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": container with ID starting with 49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884 not found: ID does not exist" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.127123 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:47:27 crc kubenswrapper[4766]: I0130 16:47:27.188225 4766 scope.go:117] "RemoveContainer" containerID="d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1" Jan 30 16:47:28 crc kubenswrapper[4766]: I0130 16:47:28.049281 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f217490-8a26-4f4b-935b-fe5918500948" path="/var/lib/kubelet/pods/4f217490-8a26-4f4b-935b-fe5918500948/volumes" Jan 30 16:47:28 crc kubenswrapper[4766]: I0130 16:47:28.051662 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" path="/var/lib/kubelet/pods/7fa69536-b701-43a4-814a-2ba16974b1dd/volumes" Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.734443 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.734549 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.735844 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740351 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740579 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.741813 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.741848 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.734288 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.734920 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735100 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735255 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735321 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.738130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.739445 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.739484 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.734232 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container 
process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735306 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735601 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735627 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735958 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.737168 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.738994 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.739027 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.045312 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 
16:47:39.045671 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.187539 4766 generic.go:334] "Generic (PLEG): container finished" podID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerID="2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" exitCode=0 Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.187595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666"} Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.263166 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437439 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437535 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437664 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437682 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437765 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: 
\"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.443207 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.443709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4" (OuterVolumeSpecName: "kube-api-access-8jfm4") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "kube-api-access-8jfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.474445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.475008 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.477307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.478628 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config" (OuterVolumeSpecName: "config") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.491607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539503 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539551 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539560 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539570 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539579 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539587 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539596 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.198874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"c0a3cd47bf6f73c69d465e105e571ff0dfdead63ace53c2387dc41608358f285"} Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.199449 4766 scope.go:117] "RemoveContainer" containerID="7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.198932 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.223269 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.229454 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.229939 4766 scope.go:117] "RemoveContainer" containerID="2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399218 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399622 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399640 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399652 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399658 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399673 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399680 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399689 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399695 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399713 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399721 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-utilities" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399727 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-utilities" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399740 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399745 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc 
kubenswrapper[4766]: E0130 16:47:40.399752 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-content" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399758 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-content" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399769 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399786 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399793 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399802 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399808 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399815 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399821 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399831 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399844 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399850 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399858 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399873 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399879 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399889 
4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399895 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399903 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399908 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399918 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399932 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399938 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399947 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399953 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399971 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399981 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399988 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399999 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400017 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400023 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: 
E0130 16:47:40.400031 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400038 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400050 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400057 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400066 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400072 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400083 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400089 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400098 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400105 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400115 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400135 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400143 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400156 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400165 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400193 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400199 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 
16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400208 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400213 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400225 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400231 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400241 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400246 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400253 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400267 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400273 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400284 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400291 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400301 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400306 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400312 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400318 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400327 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400333 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400344 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400351 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400498 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400512 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400525 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400537 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400547 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400557 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400567 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400576 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400587 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400594 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400604 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400615 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400628 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400640 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400652 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 
16:47:40.400664 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400678 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400686 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400695 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400701 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400710 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400719 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400727 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400739 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400751 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400771 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400780 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400789 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400806 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400814 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400826 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400838 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400846 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400861 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400878 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.402602 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.411134 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.552168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.552993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.553044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655445 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655694 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.656089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.656117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.676757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.720487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:41 crc kubenswrapper[4766]: I0130 16:47:41.213009 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.049371 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" path="/var/lib/kubelet/pods/533a3663-0294-48ef-b771-1f5fb3ae05ab/volumes" Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.215969 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0" exitCode=0 Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.216019 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0"} Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.216049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerStarted","Data":"8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64"} Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.733449 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734225 
4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734713 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734756 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.735457 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.736922 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.738312 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.738398 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:45 crc kubenswrapper[4766]: I0130 16:47:45.243720 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab" exitCode=0 Jan 30 16:47:45 crc kubenswrapper[4766]: I0130 16:47:45.243791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 
16:47:46.266730 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" exitCode=137 Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.266778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.270002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerStarted","Data":"bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.272629 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.273507 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" exitCode=137 Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.273659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.819895 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.821514 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.880938 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956752 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957094 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log" (OuterVolumeSpecName: "var-log") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957109 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957164 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957507 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957519 4766 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.958191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts" (OuterVolumeSpecName: "scripts") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.958218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run" (OuterVolumeSpecName: "var-run") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.959561 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache" (OuterVolumeSpecName: "cache") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.959869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib" (OuterVolumeSpecName: "var-lib") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.960366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock" (OuterVolumeSpecName: "lock") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.964305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v" (OuterVolumeSpecName: "kube-api-access-cp72v") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "kube-api-access-cp72v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.972597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4" (OuterVolumeSpecName: "kube-api-access-p2mp4") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "kube-api-access-p2mp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.972773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058434 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058453 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058463 4766 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058473 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058482 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058489 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058497 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058506 4766 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.062585 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.159390 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.192956 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.237703 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.260496 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.260524 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"e2895452d8c205fa0d4dc996a2287e6197931bc707b2d07e3c6da2c761ed67e2"} Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290195 4766 scope.go:117] "RemoveContainer" containerID="9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290463 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.299898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.300908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0"} Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.300956 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.330901 4766 scope.go:117] "RemoveContainer" containerID="fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.337571 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5dgz" podStartSLOduration=3.595718713 podStartE2EDuration="7.337553161s" podCreationTimestamp="2026-01-30 16:47:40 +0000 UTC" firstStartedPulling="2026-01-30 16:47:42.217759063 +0000 UTC m=+1516.855716409" lastFinishedPulling="2026-01-30 16:47:45.959593511 +0000 UTC m=+1520.597550857" observedRunningTime="2026-01-30 16:47:47.334650349 +0000 UTC m=+1521.972607705" watchObservedRunningTime="2026-01-30 16:47:47.337553161 +0000 UTC m=+1521.975510507" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.365211 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.368507 4766 scope.go:117] "RemoveContainer" containerID="1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.378384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.385136 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.388386 4766 scope.go:117] "RemoveContainer" containerID="2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.393083 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.413335 4766 scope.go:117] "RemoveContainer" containerID="cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.432485 4766 scope.go:117] "RemoveContainer" containerID="686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.450296 4766 scope.go:117] "RemoveContainer" containerID="93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.475938 4766 scope.go:117] "RemoveContainer" containerID="ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.496697 4766 scope.go:117] "RemoveContainer" containerID="3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.517414 4766 scope.go:117] "RemoveContainer" containerID="7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.535652 4766 scope.go:117] "RemoveContainer" containerID="4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.553709 4766 scope.go:117] "RemoveContainer" containerID="b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.582537 4766 scope.go:117] "RemoveContainer" containerID="8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.608362 4766 scope.go:117] 
"RemoveContainer" containerID="13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.624862 4766 scope.go:117] "RemoveContainer" containerID="374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.641878 4766 scope.go:117] "RemoveContainer" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.667491 4766 scope.go:117] "RemoveContainer" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.684926 4766 scope.go:117] "RemoveContainer" containerID="227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4" Jan 30 16:47:48 crc kubenswrapper[4766]: I0130 16:47:48.051423 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" path="/var/lib/kubelet/pods/2a501828-e06b-4096-b555-1ecd9323ee20/volumes" Jan 30 16:47:48 crc kubenswrapper[4766]: I0130 16:47:48.052429 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" path="/var/lib/kubelet/pods/8b182790-0761-450c-85d1-63ddd59ac10f/volumes" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.721214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.721507 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.762620 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.380833 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.421550 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.603609 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.603632 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.331797 4766 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podaca8dfc0-f915-4696-95c1-3c232f2ea35a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : Timed out while waiting for systemd to remove kubepods-besteffort-podaca8dfc0_f915_4696_95c1_3c232f2ea35a.slice" Jan 30 16:47:52 crc 
kubenswrapper[4766]: E0130 16:47:52.332237 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : unable to destroy cgroup paths for cgroup [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : Timed out while waiting for systemd to remove kubepods-besteffort-podaca8dfc0_f915_4696_95c1_3c232f2ea35a.slice" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.345454 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.389765 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.395338 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:53 crc kubenswrapper[4766]: I0130 16:47:53.352866 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5dgz" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" containerID="cri-o://bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" gracePeriod=2 Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.049863 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" path="/var/lib/kubelet/pods/aca8dfc0-f915-4696-95c1-3c232f2ea35a/volumes" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367198 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" exitCode=0 Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367220 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a"} Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64"} Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367296 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.378548 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.488804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.488975 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.489036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.490539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities" (OuterVolumeSpecName: "utilities") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.494827 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7" (OuterVolumeSpecName: "kube-api-access-jdjs7") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "kube-api-access-jdjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.521040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591131 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591209 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591225 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.375677 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.404164 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.413099 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:56 crc kubenswrapper[4766]: I0130 16:47:56.048790 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" path="/var/lib/kubelet/pods/350bf3b6-f831-4bd0-a887-8f4b97e294aa/volumes" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045388 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045845 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045900 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.046430 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.046484 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" gracePeriod=600 Jan 30 16:48:09 crc kubenswrapper[4766]: E0130 16:48:09.167787 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494223 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" exitCode=0 Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494270 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} Jan 30 16:48:09 
crc kubenswrapper[4766]: I0130 16:48:09.494308 4766 scope.go:117] "RemoveContainer" containerID="401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494888 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:48:09 crc kubenswrapper[4766]: E0130 16:48:09.495233 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:22 crc kubenswrapper[4766]: I0130 16:48:22.040093 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:48:22 crc kubenswrapper[4766]: E0130 16:48:22.040677 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:27 crc kubenswrapper[4766]: I0130 16:48:27.966237 4766 scope.go:117] "RemoveContainer" containerID="384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7" Jan 30 16:48:27 crc kubenswrapper[4766]: I0130 16:48:27.994027 4766 scope.go:117] "RemoveContainer" containerID="3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.019309 4766 scope.go:117] "RemoveContainer" containerID="996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.041948 4766 scope.go:117] "RemoveContainer" containerID="5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.088331 4766 scope.go:117] "RemoveContainer" containerID="46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.110464 4766 scope.go:117] "RemoveContainer" containerID="ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.140078 4766 scope.go:117] "RemoveContainer" containerID="5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.169141 4766 scope.go:117] "RemoveContainer" containerID="16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.199057 4766 scope.go:117] "RemoveContainer" containerID="b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.220120 4766 scope.go:117] "RemoveContainer" containerID="7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.236992 4766 scope.go:117] "RemoveContainer" containerID="89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.256437 4766 
scope.go:117] "RemoveContainer" containerID="cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.276463 4766 scope.go:117] "RemoveContainer" containerID="88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.313932 4766 scope.go:117] "RemoveContainer" containerID="e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.338154 4766 scope.go:117] "RemoveContainer" containerID="3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.356126 4766 scope.go:117] "RemoveContainer" containerID="10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.375578 4766 scope.go:117] "RemoveContainer" containerID="608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.409454 4766 scope.go:117] "RemoveContainer" containerID="29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.427510 4766 scope.go:117] "RemoveContainer" containerID="8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4" Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.449977 4766 scope.go:117] "RemoveContainer" containerID="2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4" Jan 30 16:48:35 crc kubenswrapper[4766]: I0130 16:48:35.592821 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:48:35 crc kubenswrapper[4766]: E0130 16:48:35.594225 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:50 crc kubenswrapper[4766]: I0130 16:48:50.039530 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:48:50 crc kubenswrapper[4766]: E0130 16:48:50.040194 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253253 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253804 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253815 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253828 4766 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253834 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253847 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253852 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253861 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-content" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253867 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-content" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253875 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253880 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253889 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253895 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253902 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server-init" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253908 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server-init" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253919 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253933 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253971 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253983 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253989 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 
16:48:55.254001 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254013 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254018 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254028 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254034 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254043 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254049 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254058 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254064 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254072 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254080 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254092 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254098 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254106 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-utilities" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254112 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-utilities" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254122 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254128 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 
16:48:55.254138 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254144 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254153 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254159 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254298 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254308 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254320 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254327 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254340 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254349 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254360 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254372 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254381 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254391 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254402 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254412 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254421 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254433 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" 
containerName="object-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254442 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254450 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254461 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254475 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.255469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.282082 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355266 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: 
\"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.478478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.576248 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.872285 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.898997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerStarted","Data":"10469e338f5860f4a08b1149ed32667edce3f343f0e1a22ed8664ef3328f8240"} Jan 30 16:48:56 crc kubenswrapper[4766]: I0130 16:48:56.907837 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" exitCode=0 Jan 30 16:48:56 crc kubenswrapper[4766]: I0130 16:48:56.907892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb"} Jan 30 16:48:58 crc kubenswrapper[4766]: I0130 16:48:58.927305 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" exitCode=0 Jan 30 16:48:58 crc kubenswrapper[4766]: I0130 16:48:58.927410 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855"} Jan 30 16:48:59 crc kubenswrapper[4766]: I0130 16:48:59.935916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerStarted","Data":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} Jan 30 16:48:59 crc kubenswrapper[4766]: I0130 16:48:59.957274 4766 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-b2xcg" podStartSLOduration=2.401523901 podStartE2EDuration="4.957255746s" podCreationTimestamp="2026-01-30 16:48:55 +0000 UTC" firstStartedPulling="2026-01-30 16:48:56.909705985 +0000 UTC m=+1591.547663331" lastFinishedPulling="2026-01-30 16:48:59.46543783 +0000 UTC m=+1594.103395176" observedRunningTime="2026-01-30 16:48:59.953075679 +0000 UTC m=+1594.591033025" watchObservedRunningTime="2026-01-30 16:48:59.957255746 +0000 UTC m=+1594.595213082" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.039982 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:05 crc kubenswrapper[4766]: E0130 16:49:05.040516 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.576875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.576975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.625394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:06 crc kubenswrapper[4766]: I0130 16:49:06.013444 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:06 crc kubenswrapper[4766]: I0130 16:49:06.072000 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.001280 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2xcg" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" containerID="cri-o://c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" gracePeriod=2 Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.411521 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445607 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.446996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities" (OuterVolumeSpecName: "utilities") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.453327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw" (OuterVolumeSpecName: "kube-api-access-bc5jw") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "kube-api-access-bc5jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.505660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548366 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548426 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548442 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015468 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" exitCode=0 Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"10469e338f5860f4a08b1149ed32667edce3f343f0e1a22ed8664ef3328f8240"} Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015574 4766 scope.go:117] "RemoveContainer" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015705 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.035245 4766 scope.go:117] "RemoveContainer" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.054430 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.061149 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.064820 4766 scope.go:117] "RemoveContainer" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.083853 4766 scope.go:117] "RemoveContainer" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.084448 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": container with ID starting with c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862 not found: ID does not exist" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.084505 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} err="failed to get container status \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": rpc error: code = NotFound desc = could not find container \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": container with ID starting with c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862 not found: ID does not exist" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.084543 4766 scope.go:117] "RemoveContainer" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.084990 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": container with ID starting with 88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855 not found: ID does not exist" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085037 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855"} err="failed to get container status \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": rpc error: code = NotFound desc = could not find container \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": container with ID starting with 88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855 not found: ID does not exist" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085063 4766 scope.go:117] "RemoveContainer" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.085687 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": container with ID starting with 02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb not found: ID does not exist" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085723 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb"} err="failed to get container status \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": rpc error: code = NotFound desc = could not find container \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": container with ID starting with 02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb not found: ID does not exist" Jan 30 16:49:10 crc kubenswrapper[4766]: I0130 16:49:10.047444 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acca189-bd24-494d-974b-062f9594b0c8" path="/var/lib/kubelet/pods/8acca189-bd24-494d-974b-062f9594b0c8/volumes" Jan 30 16:49:17 crc kubenswrapper[4766]: I0130 16:49:17.039001 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:17 crc kubenswrapper[4766]: E0130 16:49:17.041102 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.796203 4766 scope.go:117] "RemoveContainer" containerID="590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.854701 4766 scope.go:117] "RemoveContainer" containerID="486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.966893 4766 scope.go:117] "RemoveContainer" containerID="fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.001444 4766 scope.go:117] "RemoveContainer" containerID="d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.024658 4766 scope.go:117] "RemoveContainer" containerID="41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.039757 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:29 crc kubenswrapper[4766]: E0130 16:49:29.040077 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.056685 4766 scope.go:117] "RemoveContainer" 
containerID="c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.092387 4766 scope.go:117] "RemoveContainer" containerID="23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.119211 4766 scope.go:117] "RemoveContainer" containerID="05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" Jan 30 16:49:41 crc kubenswrapper[4766]: I0130 16:49:41.039333 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:41 crc kubenswrapper[4766]: E0130 16:49:41.040094 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:56 crc kubenswrapper[4766]: I0130 16:49:56.045983 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:56 crc kubenswrapper[4766]: E0130 16:49:56.046916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:10 crc kubenswrapper[4766]: I0130 16:50:10.040814 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:10 crc kubenswrapper[4766]: E0130 16:50:10.041592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:22 crc kubenswrapper[4766]: I0130 16:50:22.039094 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:22 crc kubenswrapper[4766]: E0130 16:50:22.040107 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.297271 4766 scope.go:117] "RemoveContainer" containerID="ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.321342 4766 scope.go:117] "RemoveContainer" containerID="9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.343473 4766 
scope.go:117] "RemoveContainer" containerID="c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.363228 4766 scope.go:117] "RemoveContainer" containerID="a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.385419 4766 scope.go:117] "RemoveContainer" containerID="ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.403431 4766 scope.go:117] "RemoveContainer" containerID="d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.426425 4766 scope.go:117] "RemoveContainer" containerID="894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.451020 4766 scope.go:117] "RemoveContainer" containerID="0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.467016 4766 scope.go:117] "RemoveContainer" containerID="ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.484604 4766 scope.go:117] "RemoveContainer" containerID="abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" Jan 30 16:50:35 crc kubenswrapper[4766]: I0130 16:50:35.040128 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:35 crc kubenswrapper[4766]: E0130 16:50:35.041013 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:46 crc kubenswrapper[4766]: I0130 16:50:46.046411 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:46 crc kubenswrapper[4766]: E0130 16:50:46.047195 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:00 crc kubenswrapper[4766]: I0130 16:51:00.039592 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:00 crc kubenswrapper[4766]: E0130 16:51:00.040291 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:14 crc kubenswrapper[4766]: I0130 16:51:14.039610 4766 scope.go:117] "RemoveContainer" 
containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:14 crc kubenswrapper[4766]: E0130 16:51:14.040335 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:27 crc kubenswrapper[4766]: I0130 16:51:27.039459 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:27 crc kubenswrapper[4766]: E0130 16:51:27.040156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:29 crc kubenswrapper[4766]: I0130 16:51:29.593568 4766 scope.go:117] "RemoveContainer" containerID="53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab" Jan 30 16:51:41 crc kubenswrapper[4766]: I0130 16:51:41.039234 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:41 crc kubenswrapper[4766]: E0130 16:51:41.039933 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:53 crc kubenswrapper[4766]: I0130 16:51:53.038920 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:53 crc kubenswrapper[4766]: E0130 16:51:53.039689 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:07 crc kubenswrapper[4766]: I0130 16:52:07.039082 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:07 crc kubenswrapper[4766]: E0130 16:52:07.039816 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:21 crc kubenswrapper[4766]: I0130 16:52:21.039272 4766 scope.go:117] "RemoveContainer" 
containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:21 crc kubenswrapper[4766]: E0130 16:52:21.040004 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.664710 4766 scope.go:117] "RemoveContainer" containerID="89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.683598 4766 scope.go:117] "RemoveContainer" containerID="1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.707776 4766 scope.go:117] "RemoveContainer" containerID="c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.729316 4766 scope.go:117] "RemoveContainer" containerID="6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.748059 4766 scope.go:117] "RemoveContainer" containerID="244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.789896 4766 scope.go:117] "RemoveContainer" containerID="66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.824669 4766 scope.go:117] "RemoveContainer" containerID="a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d" Jan 30 16:52:33 crc kubenswrapper[4766]: I0130 16:52:33.040067 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:33 crc kubenswrapper[4766]: E0130 16:52:33.040579 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:48 crc kubenswrapper[4766]: I0130 16:52:48.040914 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:48 crc kubenswrapper[4766]: E0130 16:52:48.042265 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:53:02 crc kubenswrapper[4766]: I0130 16:53:02.039506 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:53:02 crc kubenswrapper[4766]: E0130 16:53:02.040382 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:53:14 crc kubenswrapper[4766]: I0130 16:53:14.039895 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:53:14 crc kubenswrapper[4766]: I0130 16:53:14.713053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.931502 4766 scope.go:117] "RemoveContainer" containerID="3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0" Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.958654 4766 scope.go:117] "RemoveContainer" containerID="3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab" Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.987791 4766 scope.go:117] "RemoveContainer" containerID="bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" Jan 30 16:55:39 crc kubenswrapper[4766]: I0130 16:55:39.045887 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:55:39 crc kubenswrapper[4766]: I0130 16:55:39.046433 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:09 crc kubenswrapper[4766]: I0130 16:56:09.045090 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:56:09 crc kubenswrapper[4766]: I0130 16:56:09.045667 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.379350 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380282 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-content" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380296 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-content" Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380315 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380321 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380334 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-utilities" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380341 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-utilities" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380479 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.381428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.389275 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod 
\"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.660931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.700536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:35 crc kubenswrapper[4766]: I0130 16:56:35.226100 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.097855 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8" exitCode=0 Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.097996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8"} Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.098253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00"} Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.101575 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:56:37 crc kubenswrapper[4766]: I0130 16:56:37.108091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100"} Jan 30 16:56:38 crc kubenswrapper[4766]: I0130 16:56:38.117254 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100" exitCode=0 Jan 30 16:56:38 crc kubenswrapper[4766]: I0130 16:56:38.117337 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" 
event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100"} Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045646 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045955 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.046396 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.046455 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" gracePeriod=600 Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.126465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a"} Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.149850 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpxvx" podStartSLOduration=2.748112008 podStartE2EDuration="5.149827911s" podCreationTimestamp="2026-01-30 16:56:34 +0000 UTC" firstStartedPulling="2026-01-30 16:56:36.101381681 +0000 UTC m=+2050.739339027" lastFinishedPulling="2026-01-30 16:56:38.503097584 +0000 UTC m=+2053.141054930" observedRunningTime="2026-01-30 16:56:39.142636633 +0000 UTC m=+2053.780593989" watchObservedRunningTime="2026-01-30 16:56:39.149827911 +0000 UTC m=+2053.787785257" Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.135962 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" exitCode=0 Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136896 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136918 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.701017 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.701720 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.746736 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:45 crc kubenswrapper[4766]: I0130 16:56:45.253009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:45 crc kubenswrapper[4766]: I0130 16:56:45.331559 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:47 crc kubenswrapper[4766]: I0130 16:56:47.193642 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wpxvx" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" containerID="cri-o://fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a" gracePeriod=2 Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.206778 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a" exitCode=0 Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.206842 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a"} Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.207321 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00"} Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.207374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.211668 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333086 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333213 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.334233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities" (OuterVolumeSpecName: "utilities") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.340769 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz" (OuterVolumeSpecName: "kube-api-access-l58wz") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "kube-api-access-l58wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.434397 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.434434 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.944019 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.042514 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.214360 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.259305 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.267106 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:50 crc kubenswrapper[4766]: I0130 16:56:50.054154 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" path="/var/lib/kubelet/pods/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3/volumes" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.790226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791170 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-utilities" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791199 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-utilities" Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791218 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791230 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791250 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-content" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791258 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-content" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791410 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.792411 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.806755 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873331 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873433 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975538 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.976213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.976355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.002619 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.113418 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.615463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.580252 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4" exitCode=0 Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.580355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4"} Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.582330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerStarted","Data":"ccdd5bdfa52ee79edb7f6774eaa13904e61d886e41d076b7148081f587c764b4"} Jan 30 16:57:43 crc kubenswrapper[4766]: I0130 16:57:43.593585 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8" exitCode=0 Jan 30 16:57:43 crc kubenswrapper[4766]: I0130 16:57:43.593678 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8"} Jan 30 16:57:44 crc kubenswrapper[4766]: I0130 16:57:44.602848 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerStarted","Data":"b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f"} Jan 30 16:57:44 crc kubenswrapper[4766]: I0130 16:57:44.623848 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl26v" podStartSLOduration=3.212726439 podStartE2EDuration="4.623831938s" podCreationTimestamp="2026-01-30 16:57:40 +0000 UTC" firstStartedPulling="2026-01-30 16:57:42.582273907 +0000 UTC m=+2117.220231253" lastFinishedPulling="2026-01-30 16:57:43.993379406 +0000 UTC m=+2118.631336752" observedRunningTime="2026-01-30 16:57:44.618641876 +0000 UTC m=+2119.256599232" watchObservedRunningTime="2026-01-30 16:57:44.623831938 +0000 UTC m=+2119.261789284" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.114443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.115002 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.156226 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.685114 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:52 crc kubenswrapper[4766]: I0130 16:57:52.514491 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:53 crc kubenswrapper[4766]: I0130 16:57:53.661322 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl26v" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" containerID="cri-o://b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" gracePeriod=2 Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.670797 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" exitCode=0 Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.671152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f"} Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.783162 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.878830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities" (OuterVolumeSpecName: "utilities") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.883518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff" (OuterVolumeSpecName: "kube-api-access-btzff") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "kube-api-access-btzff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.906918 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979893 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979954 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979981 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121563 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-content" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121920 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-content" Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121932 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121940 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-utilities" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121973 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-utilities" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.122172 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.123379 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.133765 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182147 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182222 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182252 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283411 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.284055 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.284147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.302654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.452084 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"ccdd5bdfa52ee79edb7f6774eaa13904e61d886e41d076b7148081f587c764b4"} Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679065 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679079 4766 scope.go:117] "RemoveContainer" containerID="b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.721818 4766 scope.go:117] "RemoveContainer" containerID="0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.722588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.739985 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.756875 4766 scope.go:117] "RemoveContainer" containerID="739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.940306 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.049642 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" path="/var/lib/kubelet/pods/4fa1d02b-4884-4bcd-ba71-4b69e1671d30/volumes" Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704196 4766 generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9" exitCode=0 Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9"} Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704520 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"1b170da8fc570bba1de5c18062ea65fa9bbbbb36c3da01230677781c904c66f0"} Jan 30 16:57:57 crc kubenswrapper[4766]: I0130 16:57:57.716045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f"} Jan 30 16:57:58 crc kubenswrapper[4766]: I0130 16:57:58.725726 4766 
generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f" exitCode=0 Jan 30 16:57:58 crc kubenswrapper[4766]: I0130 16:57:58.725806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f"} Jan 30 16:57:59 crc kubenswrapper[4766]: I0130 16:57:59.758639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751"} Jan 30 16:57:59 crc kubenswrapper[4766]: I0130 16:57:59.785785 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lbvcl" podStartSLOduration=2.386991377 podStartE2EDuration="4.785763423s" podCreationTimestamp="2026-01-30 16:57:55 +0000 UTC" firstStartedPulling="2026-01-30 16:57:56.707334547 +0000 UTC m=+2131.345291893" lastFinishedPulling="2026-01-30 16:57:59.106106603 +0000 UTC m=+2133.744063939" observedRunningTime="2026-01-30 16:57:59.779401487 +0000 UTC m=+2134.417358833" watchObservedRunningTime="2026-01-30 16:57:59.785763423 +0000 UTC m=+2134.423720779" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.452848 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.453216 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.502039 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.842969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.109768 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.110452 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lbvcl" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" containerID="cri-o://07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" gracePeriod=2 Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.824463 4766 generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" exitCode=0 Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.824531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751"} Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.083216 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196815 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196969 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.197658 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities" (OuterVolumeSpecName: "utilities") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.202554 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2" (OuterVolumeSpecName: "kube-api-access-dwvh2") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "kube-api-access-dwvh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.298529 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.298562 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.320799 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.400080 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"1b170da8fc570bba1de5c18062ea65fa9bbbbb36c3da01230677781c904c66f0"} Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836751 4766 scope.go:117] "RemoveContainer" containerID="07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836871 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.857376 4766 scope.go:117] "RemoveContainer" containerID="18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.874936 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.880711 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.907798 4766 scope.go:117] "RemoveContainer" containerID="f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9" Jan 30 16:58:12 crc kubenswrapper[4766]: I0130 16:58:12.048672 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" path="/var/lib/kubelet/pods/6890c084-11c8-4290-86ee-2fb441a2b063/volumes" Jan 30 16:58:39 crc kubenswrapper[4766]: I0130 16:58:39.045100 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:58:39 crc kubenswrapper[4766]: I0130 16:58:39.045627 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.045613 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.046288 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.859846 4766 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860244 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-content" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860262 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-content" Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860273 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860281 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-utilities" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860319 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-utilities" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860521 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.861739 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.869641 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.009492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.010222 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.010321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112198 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112251 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.113278 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.113524 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.138379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.183908 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.679971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212704 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" exitCode=0 Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43"} Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"f11f3d79fd87671cc27f1787b9c35d3fc4e26257bf6aaca3cfab79e3d4d29c01"} Jan 30 16:59:12 crc kubenswrapper[4766]: I0130 16:59:12.222430 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} Jan 30 16:59:13 crc kubenswrapper[4766]: I0130 16:59:13.230162 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" exitCode=0 Jan 30 16:59:13 crc kubenswrapper[4766]: I0130 16:59:13.230307 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} Jan 30 16:59:14 crc kubenswrapper[4766]: I0130 16:59:14.240406 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} Jan 30 16:59:14 crc kubenswrapper[4766]: I0130 16:59:14.258873 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9869j" podStartSLOduration=2.824696718 podStartE2EDuration="5.258852101s" podCreationTimestamp="2026-01-30 16:59:09 +0000 UTC" firstStartedPulling="2026-01-30 16:59:11.215314807 +0000 UTC m=+2205.853272153" lastFinishedPulling="2026-01-30 16:59:13.64947019 +0000 UTC m=+2208.287427536" observedRunningTime="2026-01-30 16:59:14.255879039 +0000 UTC m=+2208.893836385" watchObservedRunningTime="2026-01-30 16:59:14.258852101 +0000 UTC m=+2208.896809467" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.185579 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.185877 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.234196 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.329388 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.468350 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.305710 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9869j" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" containerID="cri-o://4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" gracePeriod=2 Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.715403 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.914440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.914510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.915362 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities" (OuterVolumeSpecName: "utilities") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.915533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.916069 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.919801 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72" (OuterVolumeSpecName: "kube-api-access-qhf72") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). InnerVolumeSpecName "kube-api-access-qhf72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.972983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.016663 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.016702 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315529 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" exitCode=0 Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315597 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315622 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"f11f3d79fd87671cc27f1787b9c35d3fc4e26257bf6aaca3cfab79e3d4d29c01"} Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315640 4766 scope.go:117] "RemoveContainer" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.335092 4766 scope.go:117] "RemoveContainer" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.346067 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.353975 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.372124 4766 scope.go:117] "RemoveContainer" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.386758 4766 scope.go:117] "RemoveContainer" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.387435 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": container with ID starting with 4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376 not found: ID does not exist" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387479 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} err="failed to get container status 
\"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": rpc error: code = NotFound desc = could not find container \"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": container with ID starting with 4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376 not found: ID does not exist" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387507 4766 scope.go:117] "RemoveContainer" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.387926 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": container with ID starting with 98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6 not found: ID does not exist" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387978 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} err="failed to get container status \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": rpc error: code = NotFound desc = could not find container \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": container with ID starting with 98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6 not found: ID does not exist" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.388002 4766 scope.go:117] "RemoveContainer" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.388426 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": container with ID starting with f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43 not found: ID does not exist" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.388452 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43"} err="failed to get container status \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": rpc error: code = NotFound desc = could not find container \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": container with ID starting with f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43 not found: ID does not exist" Jan 30 16:59:24 crc kubenswrapper[4766]: I0130 16:59:24.050664 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfad3300-7036-4130-8d07-49650b704e5d" path="/var/lib/kubelet/pods/cfad3300-7036-4130-8d07-49650b704e5d/volumes" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.045682 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046114 4766 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046684 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046735 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" gracePeriod=600 Jan 30 16:59:39 crc kubenswrapper[4766]: E0130 16:59:39.174191 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418656 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" exitCode=0 Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418728 4766 scope.go:117] "RemoveContainer" containerID="00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.419297 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 16:59:39 crc kubenswrapper[4766]: E0130 16:59:39.419533 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:59:54 crc kubenswrapper[4766]: I0130 16:59:54.039831 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 16:59:54 crc kubenswrapper[4766]: E0130 16:59:54.041581 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142006 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142679 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142697 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142738 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-content" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142748 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-content" Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142763 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-utilities" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142789 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-utilities" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.143001 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.143613 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145279 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145832 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.146140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.155024 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"]
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.247789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.253641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.262006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.466970 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.890948 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"]
Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576317 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerID="d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91" exitCode=0
Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576386 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerDied","Data":"d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91"}
Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576602 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerStarted","Data":"81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c"}
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.824353 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") "
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") "
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985554 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") "
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.986514 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.991727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.992046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6" (OuterVolumeSpecName: "kube-api-access-sthx6") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "kube-api-access-sthx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087107 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087147 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087159 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerDied","Data":"81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c"}
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594440 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c"
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594502 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.887987 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"]
Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.892757 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"]
Jan 30 17:00:04 crc kubenswrapper[4766]: I0130 17:00:04.050447 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" path="/var/lib/kubelet/pods/08038447-8cce-4cea-9ef9-f7dbcce48697/volumes"
Jan 30 17:00:05 crc kubenswrapper[4766]: I0130 17:00:05.038945 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:00:05 crc kubenswrapper[4766]: E0130 17:00:05.039463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:00:16 crc kubenswrapper[4766]: I0130 17:00:16.045072 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:00:16 crc kubenswrapper[4766]: E0130 17:00:16.045940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:00:28 crc kubenswrapper[4766]: I0130 17:00:28.039111 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:00:28 crc kubenswrapper[4766]: E0130 17:00:28.039835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:00:30 crc kubenswrapper[4766]: I0130 17:00:30.130333 4766 scope.go:117] "RemoveContainer" containerID="b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe"
Jan 30 17:00:40 crc kubenswrapper[4766]: I0130 17:00:40.039470 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:00:40 crc kubenswrapper[4766]: E0130 17:00:40.040541 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:00:53 crc kubenswrapper[4766]: I0130 17:00:53.040513 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:00:53 crc kubenswrapper[4766]: E0130 17:00:53.041374 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:01:05 crc kubenswrapper[4766]: I0130 17:01:05.040358 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:01:05 crc kubenswrapper[4766]: E0130 17:01:05.041628 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:01:19 crc kubenswrapper[4766]: I0130 17:01:19.039782 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:01:19 crc kubenswrapper[4766]: E0130 17:01:19.040505 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:01:31 crc kubenswrapper[4766]: I0130 17:01:31.039902 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:01:31 crc kubenswrapper[4766]: E0130 17:01:31.040700 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:01:45 crc kubenswrapper[4766]: I0130 17:01:45.039891 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:01:45 crc kubenswrapper[4766]: E0130 17:01:45.041672 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:01:58 crc kubenswrapper[4766]: I0130 17:01:58.039766 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:01:58 crc kubenswrapper[4766]: E0130 17:01:58.040799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:02:11 crc kubenswrapper[4766]: I0130 17:02:11.039445 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:02:11 crc kubenswrapper[4766]: E0130 17:02:11.040380 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:02:23 crc kubenswrapper[4766]: I0130 17:02:23.040391 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:02:23 crc kubenswrapper[4766]: E0130 17:02:23.041205 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:02:36 crc kubenswrapper[4766]: I0130 17:02:36.045246 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:02:36 crc kubenswrapper[4766]: E0130 17:02:36.046060 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:02:49 crc kubenswrapper[4766]: I0130 17:02:49.039293 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:02:49 crc kubenswrapper[4766]: E0130 17:02:49.040092 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:03:00 crc kubenswrapper[4766]: I0130 17:03:00.040116 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:03:00 crc kubenswrapper[4766]: E0130 17:03:00.040900 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:03:11 crc kubenswrapper[4766]: I0130 17:03:11.039742 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:03:11 crc kubenswrapper[4766]: E0130 17:03:11.040623 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:03:23 crc kubenswrapper[4766]: I0130 17:03:23.039674 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:03:23 crc kubenswrapper[4766]: E0130 17:03:23.040389 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.217006 4766 scope.go:117] "RemoveContainer" containerID="fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a"
Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.236633 4766 scope.go:117] "RemoveContainer" containerID="06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100"
Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.254359 4766 scope.go:117] "RemoveContainer" containerID="969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8"
Jan 30 17:03:36 crc kubenswrapper[4766]: I0130 17:03:36.042779 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:03:36 crc kubenswrapper[4766]: E0130 17:03:36.043385 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:03:51 crc kubenswrapper[4766]: I0130 17:03:51.039479 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:03:51 crc kubenswrapper[4766]: E0130 17:03:51.040216 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:04:05 crc kubenswrapper[4766]: I0130 17:04:05.039951 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:04:05 crc kubenswrapper[4766]: E0130 17:04:05.040668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:04:18 crc kubenswrapper[4766]: I0130 17:04:18.039681 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:04:18 crc kubenswrapper[4766]: E0130 17:04:18.040456 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:04:32 crc kubenswrapper[4766]: I0130 17:04:32.038936 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:04:32 crc kubenswrapper[4766]: E0130 17:04:32.039575 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:04:43 crc kubenswrapper[4766]: I0130 17:04:43.039886 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:04:43 crc kubenswrapper[4766]: I0130 17:04:43.500462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"}
Jan 30 17:07:09 crc kubenswrapper[4766]: I0130 17:07:09.045343 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:07:09 crc kubenswrapper[4766]: I0130 17:07:09.045905 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.051501 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:24 crc kubenswrapper[4766]: E0130 17:07:24.052488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.052523 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.052661 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.055218 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.065405 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.158804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.159088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.159140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261569 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.262032 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.262098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.280816 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.409683 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.924865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552753 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408" exitCode=0
Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"}
Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"f76872b006034beb8012688a1eaf0f28f86663996b79ddf0dfcafacdcbde543f"}
Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.554648 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:07:26 crc kubenswrapper[4766]: I0130 17:07:26.568232 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"}
Jan 30 17:07:27 crc kubenswrapper[4766]: I0130 17:07:27.577547 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c" exitCode=0
Jan 30 17:07:27 crc kubenswrapper[4766]: I0130 17:07:27.577660 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"}
Jan 30 17:07:29 crc kubenswrapper[4766]: I0130 17:07:29.594944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"}
Jan 30 17:07:29 crc kubenswrapper[4766]: I0130 17:07:29.622870 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6sj4c" podStartSLOduration=2.825000481 podStartE2EDuration="5.622851342s" podCreationTimestamp="2026-01-30 17:07:24 +0000 UTC" firstStartedPulling="2026-01-30 17:07:25.55441188 +0000 UTC m=+2700.192369226" lastFinishedPulling="2026-01-30 17:07:28.352262741 +0000 UTC m=+2702.990220087" observedRunningTime="2026-01-30 17:07:29.617977328 +0000 UTC m=+2704.255934684" watchObservedRunningTime="2026-01-30 17:07:29.622851342 +0000 UTC m=+2704.260808688"
Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.410001 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.410669 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.456091 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.677401 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.739892 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:36 crc kubenswrapper[4766]: I0130 17:07:36.640533 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6sj4c" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server" containerID="cri-o://fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" gracePeriod=2
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.032969 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.142614 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") "
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.142747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") "
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.143042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") "
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.144945 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities" (OuterVolumeSpecName: "utilities") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.152498 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8" (OuterVolumeSpecName: "kube-api-access-zn9c8") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "kube-api-access-zn9c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.194098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244555 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244601 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244612 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.657901 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" exitCode=0
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.657953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"}
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658000 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"f76872b006034beb8012688a1eaf0f28f86663996b79ddf0dfcafacdcbde543f"}
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658048 4766 scope.go:117] "RemoveContainer" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.677816 4766 scope.go:117] "RemoveContainer" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.709204 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.716485 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"]
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.724584 4766 scope.go:117] "RemoveContainer" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.739817 4766 scope.go:117] "RemoveContainer" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"
Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.740259 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": container with ID starting with fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243 not found: ID does not exist" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740303 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"} err="failed to get container status \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": rpc error: code = NotFound desc = could not find container \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": container with ID starting with fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243 not found: ID does not exist"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740330 4766 scope.go:117] "RemoveContainer" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"
Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.740713 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": container with ID starting with 2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c not found: ID does not exist" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740748 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"} err="failed to get container status \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": rpc error: code = NotFound desc = could not find container \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": container with ID starting with 2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c not found: ID does not exist"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740768 4766 scope.go:117] "RemoveContainer" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"
Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.741336 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": container with ID starting with ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408 not found: ID does not exist" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"
Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.741376 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"} err="failed to get container status \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": rpc error: code = NotFound desc = could not find container \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": container with ID starting with ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408 not found: ID does not exist"
Jan 30 17:07:38 crc kubenswrapper[4766]: I0130 17:07:38.052308 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" path="/var/lib/kubelet/pods/dcbe56d8-9a5b-4234-9031-a67f1cd65a33/volumes"
Jan 30 17:07:39 crc kubenswrapper[4766]: I0130 17:07:39.045628 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:07:39 crc kubenswrapper[4766]: I0130 17:07:39.045978 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.045692 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046086 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046129 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046820 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046899 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" gracePeriod=600
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888086 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" exitCode=0
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"}
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"}
Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888769 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.899261 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900109 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-content"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900124 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-content"
Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900134 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900141 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server"
Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900162 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-utilities"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900170 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-utilities"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900355 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.901414 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.911811 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024970 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.151135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.220659 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.667504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156329 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f" exitCode=0
Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"}
Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerStarted","Data":"cf0f23cc7044c135f42645d5c53ead018194659143f6d3b2e787f14109e47195"}
Jan 30 17:08:47 crc kubenswrapper[4766]: I0130 17:08:47.176285 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338" exitCode=0
Jan 30 17:08:47 crc kubenswrapper[4766]: I0130 17:08:47.176324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"}
Jan 30 17:08:48 crc kubenswrapper[4766]: I0130 17:08:48.189599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerStarted","Data":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"}
Jan 30 17:08:48 crc kubenswrapper[4766]: I0130 17:08:48.210737 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgxw8" podStartSLOduration=2.80103077 podStartE2EDuration="5.210718946s" podCreationTimestamp="2026-01-30 17:08:43 +0000 UTC" firstStartedPulling="2026-01-30 17:08:45.159988282 +0000 UTC m=+2779.797945678" lastFinishedPulling="2026-01-30 17:08:47.569676508 +0000 UTC m=+2782.207633854" observedRunningTime="2026-01-30 17:08:48.208088386 +0000 UTC m=+2782.846045732" watchObservedRunningTime="2026-01-30 17:08:48.210718946 +0000 UTC m=+2782.848676292"
Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.221646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.222445 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.266376 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.306005 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.497865 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.252078 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgxw8" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server" containerID="cri-o://8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" gracePeriod=2
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.643802 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") "
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808657 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") "
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808687 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") "
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.809617 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities" (OuterVolumeSpecName: "utilities") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.816344 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt" (OuterVolumeSpecName: "kube-api-access-x24dt") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "kube-api-access-x24dt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.848632 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909825 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909868 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909914 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") on node \"crc\" DevicePath \"\""
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261619 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" exitCode=0
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261686 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"}
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.262350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"cf0f23cc7044c135f42645d5c53ead018194659143f6d3b2e787f14109e47195"}
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.262380 4766 scope.go:117] "RemoveContainer" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.278672 4766 scope.go:117] "RemoveContainer" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.298492 4766 scope.go:117] "RemoveContainer" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.307874 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.321760 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"]
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.339589 4766 scope.go:117] "RemoveContainer" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"
Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340070 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": container with ID starting with 8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a not found: ID does not exist" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340113 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"} err="failed to get container status \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": rpc error: code = NotFound desc = could not find container \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": container with ID starting with 8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a not found: ID does not exist"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340136 4766 scope.go:117] "RemoveContainer" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"
Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340539 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": container with ID starting with bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338 not found: ID does not exist" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340612 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"} err="failed to get container status \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": rpc error: code = NotFound desc = could not find container \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": container with ID starting with bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338 not found: ID does not exist"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340655 4766 scope.go:117] "RemoveContainer" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"
Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": container with ID starting with 9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f not found: ID does not exist" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"
Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340977 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"} err="failed to get container status \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": rpc error: code = NotFound desc = could not find container \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": container with ID starting with 9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f not found: ID does not exist"
Jan 30 17:08:58 crc kubenswrapper[4766]: I0130 17:08:58.053068 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857e092-05eb-4415-bd8b-c133565af044" path="/var/lib/kubelet/pods/0857e092-05eb-4415-bd8b-c133565af044/volumes"
Jan 30 17:10:09 crc kubenswrapper[4766]: I0130 17:10:09.045764 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:10:09 crc kubenswrapper[4766]: I0130 17:10:09.046327 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.045724 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.046253 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.534866 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"]
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535262 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-utilities"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535280 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-utilities"
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535297 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-content"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535306 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-content"
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535321 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535329 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535516 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.536444 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.557331 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648726 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648772 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750794 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.751341 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.751492 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.771751 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.852320 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.288442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.528651 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.530329 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.541442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667612 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.769857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.769957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " 
pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.793732 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.853476 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.016964 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" exitCode=0 Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.017219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6"} Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.017275 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"f0333f596b589563f978aad186da16aacd88d7acd56905ba8557c3d26b41ec37"} Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.132372 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:41 crc kubenswrapper[4766]: W0130 17:10:41.135626 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode507d583_4c30_4a78_902f_9b53865469c9.slice/crio-4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f WatchSource:0}: Error finding container 4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f: Status 404 returned error can't find the container with id 4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.037007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"} Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.043566 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" 
containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" exitCode=0 Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.053148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e"} Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.053210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerStarted","Data":"4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f"} Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.057001 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" exitCode=0 Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.057049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de"} Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.059396 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" exitCode=0 Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.059437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.067475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.070983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerStarted","Data":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.094904 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7tb9c" podStartSLOduration=2.674994035 podStartE2EDuration="5.094888346s" podCreationTimestamp="2026-01-30 17:10:39 +0000 UTC" firstStartedPulling="2026-01-30 17:10:41.019971247 +0000 UTC m=+2895.657928593" lastFinishedPulling="2026-01-30 17:10:43.439865558 +0000 UTC m=+2898.077822904" observedRunningTime="2026-01-30 17:10:44.094127185 +0000 UTC m=+2898.732084531" watchObservedRunningTime="2026-01-30 17:10:44.094888346 +0000 UTC m=+2898.732845692" Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.123211 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zngnx" podStartSLOduration=2.705769101 podStartE2EDuration="4.123190806s" podCreationTimestamp="2026-01-30 17:10:40 +0000 UTC" firstStartedPulling="2026-01-30 17:10:42.04590421 +0000 UTC m=+2896.683861556" 
lastFinishedPulling="2026-01-30 17:10:43.463325915 +0000 UTC m=+2898.101283261" observedRunningTime="2026-01-30 17:10:44.115569817 +0000 UTC m=+2898.753527183" watchObservedRunningTime="2026-01-30 17:10:44.123190806 +0000 UTC m=+2898.761148152" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.853304 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.853946 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.907450 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.143134 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.181123 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.853963 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.854037 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.913728 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:51 crc kubenswrapper[4766]: I0130 17:10:51.157113 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.125697 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7tb9c" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" containerID="cri-o://9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" gracePeriod=2 Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.554970 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.565938 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.753565 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.753936 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.754036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.755820 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities" (OuterVolumeSpecName: "utilities") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.760022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh" (OuterVolumeSpecName: "kube-api-access-fprdh") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "kube-api-access-fprdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.855293 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.855336 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.118865 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134704 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" exitCode=0 Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134766 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134789 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134829 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"f0333f596b589563f978aad186da16aacd88d7acd56905ba8557c3d26b41ec37"} Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134849 4766 scope.go:117] "RemoveContainer" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.135269 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zngnx" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" containerID="cri-o://3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" gracePeriod=2 Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.161142 4766 scope.go:117] "RemoveContainer" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.167817 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.175553 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.182298 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.199617 4766 scope.go:117] "RemoveContainer" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.341519 4766 scope.go:117] "RemoveContainer" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.342608 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": container with ID starting with 9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5 not found: ID does not exist" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.342657 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} err="failed to get container status \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": rpc error: code = NotFound desc = could not find container \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": container with ID starting with 9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5 not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.342690 4766 scope.go:117] 
"RemoveContainer" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.343511 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": container with ID starting with 9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c not found: ID does not exist" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.343540 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"} err="failed to get container status \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": rpc error: code = NotFound desc = could not find container \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": container with ID starting with 9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.343562 4766 scope.go:117] "RemoveContainer" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.344772 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": container with ID starting with 202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6 not found: ID does not exist" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.344816 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6"} err="failed to get container status \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": rpc error: code = NotFound desc = could not find container \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": container with ID starting with 202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6 not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.553493 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573467 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.574707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities" (OuterVolumeSpecName: "utilities") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.578267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt" (OuterVolumeSpecName: "kube-api-access-cfswt") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "kube-api-access-cfswt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.629337 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675503 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675801 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675880 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.048127 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" path="/var/lib/kubelet/pods/a230b4cf-8e5f-4073-9703-f9b0bb153676/volumes" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143535 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" exitCode=0 Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143602 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143656 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.144478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f"} Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.144508 4766 scope.go:117] "RemoveContainer" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.175596 4766 scope.go:117] "RemoveContainer" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.183554 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.191537 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.198103 4766 scope.go:117] "RemoveContainer" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.213890 4766 scope.go:117] "RemoveContainer" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.214286 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": container with ID starting with 3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9 not found: ID does not exist" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214320 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} err="failed to get container status \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": rpc error: code = NotFound desc = could not find container \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": container with ID starting with 3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9 not found: ID does not exist" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214341 4766 scope.go:117] "RemoveContainer" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.214578 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": container with ID starting with 3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de not found: ID does not exist" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214610 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de"} err="failed to get container status \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": rpc error: code = NotFound desc = could not find 
container \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": container with ID starting with 3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de not found: ID does not exist" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214626 4766 scope.go:117] "RemoveContainer" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.215006 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": container with ID starting with d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e not found: ID does not exist" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.215029 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e"} err="failed to get container status \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": rpc error: code = NotFound desc = could not find container \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": container with ID starting with d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e not found: ID does not exist" Jan 30 17:10:56 crc kubenswrapper[4766]: I0130 17:10:56.049423 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e507d583-4c30-4a78-902f-9b53865469c9" path="/var/lib/kubelet/pods/e507d583-4c30-4a78-902f-9b53865469c9/volumes" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.045542 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.046649 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.046744 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.047780 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.047861 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" gracePeriod=600 Jan 30 17:11:09 crc kubenswrapper[4766]: E0130 17:11:09.717620 4766 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.268961 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" exitCode=0 Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.269050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"} Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.270065 4766 scope.go:117] "RemoveContainer" containerID="5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.270628 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:10 crc kubenswrapper[4766]: E0130 17:11:10.270881 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:25 crc kubenswrapper[4766]: I0130 17:11:25.039070 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:25 crc kubenswrapper[4766]: E0130 17:11:25.039887 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:36 crc kubenswrapper[4766]: I0130 17:11:36.039115 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:36 crc kubenswrapper[4766]: E0130 17:11:36.040034 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:50 crc kubenswrapper[4766]: I0130 17:11:50.040034 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:50 crc kubenswrapper[4766]: E0130 17:11:50.040855 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:02 crc kubenswrapper[4766]: I0130 17:12:02.040022 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:02 crc kubenswrapper[4766]: E0130 17:12:02.040990 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:14 crc kubenswrapper[4766]: I0130 17:12:14.040059 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:14 crc kubenswrapper[4766]: E0130 17:12:14.041704 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:27 crc kubenswrapper[4766]: I0130 17:12:27.039845 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:27 crc kubenswrapper[4766]: E0130 17:12:27.041275 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:39 crc kubenswrapper[4766]: I0130 17:12:39.040053 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:39 crc kubenswrapper[4766]: E0130 17:12:39.040926 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:52 crc kubenswrapper[4766]: I0130 17:12:52.040009 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:52 crc kubenswrapper[4766]: E0130 17:12:52.040606 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:06 crc kubenswrapper[4766]: I0130 17:13:06.039828 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:06 crc kubenswrapper[4766]: E0130 17:13:06.040725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:19 crc kubenswrapper[4766]: I0130 17:13:19.040297 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:19 crc kubenswrapper[4766]: E0130 17:13:19.041865 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:31 crc kubenswrapper[4766]: I0130 17:13:31.039440 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:31 crc kubenswrapper[4766]: E0130 17:13:31.040288 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:42 crc kubenswrapper[4766]: I0130 17:13:42.039417 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:42 crc kubenswrapper[4766]: E0130 17:13:42.040916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:53 crc kubenswrapper[4766]: I0130 17:13:53.039554 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:53 crc kubenswrapper[4766]: E0130 17:13:53.040387 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:04 crc kubenswrapper[4766]: I0130 17:14:04.040619 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:04 crc kubenswrapper[4766]: E0130 17:14:04.041853 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:15 crc kubenswrapper[4766]: I0130 17:14:15.039122 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:15 crc kubenswrapper[4766]: E0130 17:14:15.039836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:30 crc kubenswrapper[4766]: I0130 17:14:30.039532 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:30 crc kubenswrapper[4766]: E0130 17:14:30.040454 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:43 crc kubenswrapper[4766]: I0130 17:14:43.039313 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:43 crc kubenswrapper[4766]: E0130 17:14:43.041740 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:54 crc kubenswrapper[4766]: I0130 17:14:54.039305 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:54 crc kubenswrapper[4766]: E0130 17:14:54.040245 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.157672 4766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.159715 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.159830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.159915 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.159997 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160083 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160158 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160250 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160332 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160409 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160491 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160595 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160699 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.161050 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.161160 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.162033 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.164505 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.164685 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.167871 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294649 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294781 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396331 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.397640 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod 
\"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.402478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.412241 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.485991 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.932637 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.075450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerStarted","Data":"e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"} Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.075884 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerStarted","Data":"7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58"} Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.091771 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" podStartSLOduration=1.091749869 podStartE2EDuration="1.091749869s" podCreationTimestamp="2026-01-30 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:01.089416776 +0000 UTC m=+3155.727374122" watchObservedRunningTime="2026-01-30 17:15:01.091749869 +0000 UTC m=+3155.729707215" Jan 30 17:15:02 crc kubenswrapper[4766]: I0130 17:15:02.082397 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerID="e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b" exitCode=0 Jan 30 17:15:02 crc kubenswrapper[4766]: I0130 17:15:02.082452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerDied","Data":"e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"} Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.317130 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"
Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") "
Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") "
Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") "
Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.443569 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "20c37317-bc31-4749-bf2a-000f3786ebdb" (UID: "20c37317-bc31-4749-bf2a-000f3786ebdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.447868 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20c37317-bc31-4749-bf2a-000f3786ebdb" (UID: "20c37317-bc31-4749-bf2a-000f3786ebdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
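
The UnmountVolume / TearDown pairs above are the kubelet's volume reconciler converging actual state toward desired state: once the collect-profiles pod terminates, its volumes drop out of the desired state, and every still-mounted volume becomes an unmount operation. A minimal sketch of that diff in Go follows; the kubelet's real reconciler tracks rich volume specs, so the plain string sets here are purely an illustrative assumption.

    // A sketch only: the kubelet's reconciler compares its desired-state and
    // actual-state caches; plain string sets stand in for its volume types.
    package main

    import "fmt"

    // reconcile returns the volumes to mount (desired but not mounted) and to
    // unmount (mounted but no longer desired), mirroring the MountVolume /
    // UnmountVolume operations in the log.
    func reconcile(desired, actual map[string]bool) (toMount, toUnmount []string) {
        for v := range desired {
            if !actual[v] {
                toMount = append(toMount, v)
            }
        }
        for v := range actual {
            if !desired[v] {
                toUnmount = append(toUnmount, v)
            }
        }
        return
    }

    func main() {
        // After the collect-profiles pod exits, nothing is desired any more,
        // so all three of its mounted volumes become unmount operations.
        desired := map[string]bool{}
        actual := map[string]bool{
            "config-volume":         true,
            "secret-volume":         true,
            "kube-api-access-94758": true,
        }
        toMount, toUnmount := reconcile(desired, actual)
        fmt.Println("mount:", toMount, "unmount:", toUnmount)
    }

Run against the three volumes above with an empty desired set, it reports all three as unmounts (in map iteration order), matching the teardown sequence in the log.
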
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544234 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544270 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544283 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerDied","Data":"7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58"} Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097858 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097878 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.393459 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.398309 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 17:15:06 crc kubenswrapper[4766]: I0130 17:15:06.049480 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" path="/var/lib/kubelet/pods/aabaaf93-f51e-4847-b39a-8ecccc43f8d4/volumes" Jan 30 17:15:09 crc kubenswrapper[4766]: I0130 17:15:09.039767 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:09 crc kubenswrapper[4766]: E0130 17:15:09.040067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:22 crc kubenswrapper[4766]: I0130 17:15:22.039387 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:22 crc kubenswrapper[4766]: E0130 17:15:22.040235 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:30 crc kubenswrapper[4766]: I0130 17:15:30.514511 4766 scope.go:117] "RemoveContainer" containerID="add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e" Jan 30 17:15:33 crc kubenswrapper[4766]: I0130 17:15:33.039783 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:33 crc kubenswrapper[4766]: E0130 17:15:33.040676 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:44 crc kubenswrapper[4766]: I0130 17:15:44.040079 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:44 crc kubenswrapper[4766]: E0130 17:15:44.040948 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:56 crc kubenswrapper[4766]: I0130 17:15:56.042899 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:56 crc kubenswrapper[4766]: E0130 17:15:56.043844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:16:11 crc kubenswrapper[4766]: I0130 17:16:11.040450 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:16:11 crc kubenswrapper[4766]: I0130 17:16:11.584929 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.738482 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:29 crc kubenswrapper[4766]: E0130 17:17:29.740072 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.740091 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.740313 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.741635 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.745787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858585 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.961333 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.961458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: 
\"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.986388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:30 crc kubenswrapper[4766]: I0130 17:17:30.068806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:30 crc kubenswrapper[4766]: I0130 17:17:30.615511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146414 4766 generic.go:334] "Generic (PLEG): container finished" podID="8d7c1afe-4961-4d01-9513-635a558d6eba" containerID="586acc78e1d93b943a55480254d09794912e7f6511e2aa6c95cd772d5a4e71e0" exitCode=0 Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146486 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerDied","Data":"586acc78e1d93b943a55480254d09794912e7f6511e2aa6c95cd772d5a4e71e0"} Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"c4a1ca07aec1c81f773e9b6ff12e10f2e9b2b05c89b31b465ae9387f71a0c82a"} Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.148667 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:17:36 crc kubenswrapper[4766]: I0130 17:17:36.189000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38"} Jan 30 17:17:37 crc kubenswrapper[4766]: I0130 17:17:37.200133 4766 generic.go:334] "Generic (PLEG): container finished" podID="8d7c1afe-4961-4d01-9513-635a558d6eba" containerID="f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38" exitCode=0 Jan 30 17:17:37 crc kubenswrapper[4766]: I0130 17:17:37.200254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerDied","Data":"f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38"} Jan 30 17:17:38 crc kubenswrapper[4766]: I0130 17:17:38.214265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"836e59fbff2828406622783e6759c8e36d18b33bcebcb00b0a79100a58039c34"} Jan 30 17:17:38 crc kubenswrapper[4766]: I0130 17:17:38.239050 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbqw6" podStartSLOduration=2.805500164 podStartE2EDuration="9.239023414s" podCreationTimestamp="2026-01-30 17:17:29 +0000 UTC" firstStartedPulling="2026-01-30 17:17:31.148366956 +0000 UTC m=+3305.786324302" 
lastFinishedPulling="2026-01-30 17:17:37.581890196 +0000 UTC m=+3312.219847552" observedRunningTime="2026-01-30 17:17:38.2339097 +0000 UTC m=+3312.871867056" watchObservedRunningTime="2026-01-30 17:17:38.239023414 +0000 UTC m=+3312.876980780" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.069394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.070394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.114632 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.111475 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.177895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.220416 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.220964 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqx4x" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" containerID="cri-o://80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" gracePeriod=2 Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.690604 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.882383 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.882843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities" (OuterVolumeSpecName: "utilities") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.883016 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.884077 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.884409 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.889363 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj" (OuterVolumeSpecName: "kube-api-access-mjxkj") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "kube-api-access-mjxkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.934143 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.985190 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.985224 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301593 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" exitCode=0 Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"} Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"4e2e822728d72b043828d2c376fae8de09ee8b30107e67f666204b30101944fd"} Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301687 4766 scope.go:117] "RemoveContainer" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301830 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.331699 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.335451 4766 scope.go:117] "RemoveContainer" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.339153 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.360015 4766 scope.go:117] "RemoveContainer" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.394766 4766 scope.go:117] "RemoveContainer" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.395469 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": container with ID starting with 80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78 not found: ID does not exist" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395509 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"} err="failed to get container status \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": rpc error: code = NotFound desc = could not find container \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": container with ID starting with 80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78 not found: ID does not exist" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395577 4766 scope.go:117] "RemoveContainer" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.395917 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": container with ID starting with e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571 not found: ID does not exist" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395972 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571"} err="failed to get container status \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": rpc error: code = NotFound desc = could not find container \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": container with ID starting with e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571 not found: ID does not exist" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395999 4766 scope.go:117] "RemoveContainer" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.397422 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": container with ID starting with ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b not found: ID does not exist" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.397473 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b"} err="failed to get container status \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": rpc error: code = NotFound desc = could not find container \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": container with ID starting with ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b not found: ID does not exist" Jan 30 17:17:52 crc kubenswrapper[4766]: I0130 17:17:52.048550 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" path="/var/lib/kubelet/pods/748d2b4a-b71d-4ecb-9df9-166be9b20302/volumes" Jan 30 17:18:39 crc kubenswrapper[4766]: I0130 17:18:39.045360 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:18:39 crc kubenswrapper[4766]: I0130 17:18:39.045949 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:09 crc kubenswrapper[4766]: I0130 17:19:09.045397 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:19:09 crc kubenswrapper[4766]: I0130 17:19:09.046051 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.045435 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.046156 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.046235 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.077664 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.077756 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" gracePeriod=600 Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.086795 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" exitCode=0 Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.086922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.087623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.087726 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:21:39 crc kubenswrapper[4766]: I0130 17:21:39.046042 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:21:39 crc kubenswrapper[4766]: I0130 17:21:39.047994 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:09 crc kubenswrapper[4766]: I0130 17:22:09.045644 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:22:09 crc kubenswrapper[4766]: I0130 17:22:09.046247 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:39 crc 
kubenswrapper[4766]: I0130 17:22:39.045987 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.046766 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.046819 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.047594 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.047672 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" gracePeriod=600 Jan 30 17:22:39 crc kubenswrapper[4766]: E0130 17:22:39.171513 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294473 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" exitCode=0 Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294576 4766 scope.go:117] "RemoveContainer" containerID="d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.295344 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:22:39 crc kubenswrapper[4766]: E0130 17:22:39.295832 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:22:54 crc kubenswrapper[4766]: I0130 17:22:54.039818 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:22:54 crc kubenswrapper[4766]: E0130 17:22:54.041326 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:09 crc kubenswrapper[4766]: I0130 17:23:09.039720 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:09 crc kubenswrapper[4766]: E0130 17:23:09.040582 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:22 crc kubenswrapper[4766]: I0130 17:23:22.040014 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:22 crc kubenswrapper[4766]: E0130 17:23:22.040895 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:36 crc kubenswrapper[4766]: I0130 17:23:36.043878 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:36 crc kubenswrapper[4766]: E0130 17:23:36.044696 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:47 crc kubenswrapper[4766]: I0130 17:23:47.038782 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
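
Each RemoveContainer / "Error syncing pod" pair above is the kubelet declining to restart the crashed machine-config-daemon because its restart backoff has not yet expired; the quoted "back-off 5m0s" is the cap of a capped exponential backoff. A small Go sketch of that schedule: the 5m cap comes from the log itself, while the 10s base and doubling factor are the kubelet's commonly documented defaults, assumed here for illustration.

    // The 5m cap appears in the log; the 10s base and doubling factor are
    // the kubelet's commonly documented defaults, assumed for this sketch.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        baseDelay = 10 * time.Second
        maxDelay  = 5 * time.Minute // the "back-off 5m0s" in the log
    )

    // nextBackoff doubles the previous delay until it saturates at maxDelay.
    func nextBackoff(cur time.Duration) time.Duration {
        if cur == 0 {
            return baseDelay
        }
        if next := cur * 2; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for i := 1; i <= 8; i++ {
            d = nextBackoff(d)
            fmt.Printf("crash %d -> wait %v\n", i, d)
        }
        // crash 1 -> 10s, crash 2 -> 20s, ... then every retry waits 5m0s.
    }

The sync loop still wakes every few seconds (hence the 11-15s spacing of the entries above), but each attempt is rejected until the full backoff window has elapsed, which is why the container only actually restarts roughly every five minutes.
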
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:23:58 crc kubenswrapper[4766]: I0130 17:23:58.038976 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:23:58 crc kubenswrapper[4766]: E0130 17:23:58.039869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:13 crc kubenswrapper[4766]: I0130 17:24:13.040352 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:13 crc kubenswrapper[4766]: E0130 17:24:13.041246 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:28 crc kubenswrapper[4766]: I0130 17:24:28.039946 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:28 crc kubenswrapper[4766]: E0130 17:24:28.041145 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.697598 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699349 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-utilities" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699390 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-utilities" Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699432 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699442 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-content" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699448 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-content" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699571 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.700615 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.712929 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.879942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.880075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.880146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.982286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.982562 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " 
pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.007371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.066884 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.487735 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.039465 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:42 crc kubenswrapper[4766]: E0130 17:24:42.040022 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307764 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" exitCode=0 Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d"} Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307853 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerStarted","Data":"a407b6681c75c46865be92f23c418aad527a3d363d1077ce91e1a166879a60a7"} Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.309719 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.895513 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.897950 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.913225 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033719 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135294 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135833 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.163368 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.222156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.324625 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" exitCode=0 Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.324723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50"} Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.525960 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.679736 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.681261 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.691308 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.846984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.847388 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.847413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.974742 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.006155 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:45 crc kubenswrapper[4766]: W0130 17:24:45.257748 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a4a051d_4bb1_46b4_9e9c_cc50b06e823f.slice/crio-579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c WatchSource:0}: Error finding container 579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c: Status 404 returned error can't find the container with id 579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.257948 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.332103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.335028 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerStarted","Data":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337611 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36" exitCode=0 Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337710 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerStarted","Data":"47eb60c93a789d09e469d6d91f744380bb36ac2e3fc7ca1dbbff8f9e7af1d3f7"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.362704 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5hvxv" podStartSLOduration=2.7972187269999997 podStartE2EDuration="5.36268612s" podCreationTimestamp="2026-01-30 17:24:40 +0000 UTC" firstStartedPulling="2026-01-30 17:24:42.309389839 +0000 UTC m=+3736.947347185" lastFinishedPulling="2026-01-30 17:24:44.874857232 +0000 UTC m=+3739.512814578" observedRunningTime="2026-01-30 17:24:45.354356672 +0000 UTC m=+3739.992314028" watchObservedRunningTime="2026-01-30 17:24:45.36268612 +0000 UTC m=+3740.000643466" Jan 30 17:24:46 crc kubenswrapper[4766]: I0130 17:24:46.345932 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" exitCode=0 Jan 30 17:24:46 crc kubenswrapper[4766]: I0130 17:24:46.346002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf"} Jan 30 
17:24:46 crc kubenswrapper[4766]: I0130 17:24:46.350915 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerStarted","Data":"d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35"} Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368092 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35" exitCode=0 Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35"} Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerStarted","Data":"abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b"} Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.373359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"} Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.393022 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cl9cr" podStartSLOduration=2.982633186 podStartE2EDuration="4.392996648s" podCreationTimestamp="2026-01-30 17:24:43 +0000 UTC" firstStartedPulling="2026-01-30 17:24:45.339993029 +0000 UTC m=+3739.977950375" lastFinishedPulling="2026-01-30 17:24:46.750356491 +0000 UTC m=+3741.388313837" observedRunningTime="2026-01-30 17:24:47.390848289 +0000 UTC m=+3742.028805645" watchObservedRunningTime="2026-01-30 17:24:47.392996648 +0000 UTC m=+3742.030953994" Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.391961 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" exitCode=0 Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.392158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"} Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.392264 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"} Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.416963 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjxb9" podStartSLOduration=2.946285696 podStartE2EDuration="4.416939288s" podCreationTimestamp="2026-01-30 17:24:44 +0000 UTC" firstStartedPulling="2026-01-30 17:24:46.348360522 +0000 UTC m=+3740.986317868" lastFinishedPulling="2026-01-30 17:24:47.819014114 +0000 UTC m=+3742.456971460" 
observedRunningTime="2026-01-30 17:24:48.408461806 +0000 UTC m=+3743.046419152" watchObservedRunningTime="2026-01-30 17:24:48.416939288 +0000 UTC m=+3743.054896634" Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.067614 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.068220 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.108088 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.450957 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.074034 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.423451 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5hvxv" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" containerID="cri-o://653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" gracePeriod=2 Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.800369 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.891033 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892007 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities" (OuterVolumeSpecName: "utilities") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.891174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892621 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.897971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5" (OuterVolumeSpecName: "kube-api-access-7n9d5") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "kube-api-access-7n9d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.994128 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.223038 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.224401 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.264111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432231 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" exitCode=0 Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432270 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"a407b6681c75c46865be92f23c418aad527a3d363d1077ce91e1a166879a60a7"} Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432428 4766 scope.go:117] "RemoveContainer" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.452027 4766 scope.go:117] "RemoveContainer" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.471051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.481432 4766 scope.go:117] "RemoveContainer" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504195 4766 scope.go:117] "RemoveContainer" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.504704 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": container with ID starting with 653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de not found: ID does not exist" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504751 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} err="failed to get container status \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": rpc error: code = NotFound desc = could not find container \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": container with ID starting with 653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de not found: ID does not exist" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504785 4766 scope.go:117] "RemoveContainer" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.505332 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": container with ID starting with 9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50 not found: ID does not exist" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505368 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50"} 
err="failed to get container status \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": rpc error: code = NotFound desc = could not find container \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": container with ID starting with 9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50 not found: ID does not exist" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505395 4766 scope.go:117] "RemoveContainer" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.505732 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": container with ID starting with 18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d not found: ID does not exist" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505786 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d"} err="failed to get container status \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": rpc error: code = NotFound desc = could not find container \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": container with ID starting with 18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d not found: ID does not exist" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.006498 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.007143 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.039542 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:55 crc kubenswrapper[4766]: E0130 17:24:55.040003 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.049240 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.453907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.480896 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.517039 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.668145 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.674838 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:56 crc kubenswrapper[4766]: I0130 17:24:56.048276 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" path="/var/lib/kubelet/pods/4569e00a-4dea-4144-999c-4ac356b760d8/volumes" Jan 30 17:24:56 crc kubenswrapper[4766]: I0130 17:24:56.669307 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:57 crc kubenswrapper[4766]: I0130 17:24:57.451761 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cl9cr" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" containerID="cri-o://abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" gracePeriod=2 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.070692 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470410 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" exitCode=0 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b"} Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470654 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjxb9" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" containerID="cri-o://dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" gracePeriod=2 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.634530 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.760916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.761057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.761143 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.762057 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities" (OuterVolumeSpecName: "utilities") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.768296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq" (OuterVolumeSpecName: "kube-api-access-5d9zq") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "kube-api-access-5d9zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.813984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.820810 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862783 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862817 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862829 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.963676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.965399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.965439 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.966226 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities" (OuterVolumeSpecName: "utilities") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.968306 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5" (OuterVolumeSpecName: "kube-api-access-szxw5") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "kube-api-access-szxw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.988045 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066646 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066695 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066708 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479035 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" exitCode=0 Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479117 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479228 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479249 4766 scope.go:117] "RemoveContainer" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.481726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"47eb60c93a789d09e469d6d91f744380bb36ac2e3fc7ca1dbbff8f9e7af1d3f7"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.481844 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.495808 4766 scope.go:117] "RemoveContainer" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.511908 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.517750 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.526627 4766 scope.go:117] "RemoveContainer" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.538619 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.548165 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.548911 4766 scope.go:117] "RemoveContainer" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.549753 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": container with ID starting with dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df not found: ID does not exist" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.549783 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"} err="failed to get container status \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": rpc error: code = NotFound desc = could not find container \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": container with ID starting with dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.549806 4766 scope.go:117] "RemoveContainer" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.550116 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": container with ID starting with 4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016 not found: ID does not exist" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550164 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"} err="failed to get container status \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": rpc error: code = NotFound desc = could not find container \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": container with ID starting with 
4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016 not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550212 4766 scope.go:117] "RemoveContainer" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.550491 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": container with ID starting with 71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf not found: ID does not exist" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550561 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf"} err="failed to get container status \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": rpc error: code = NotFound desc = could not find container \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": container with ID starting with 71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550577 4766 scope.go:117] "RemoveContainer" containerID="abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.567137 4766 scope.go:117] "RemoveContainer" containerID="d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.585722 4766 scope.go:117] "RemoveContainer" containerID="5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36" Jan 30 17:25:00 crc kubenswrapper[4766]: I0130 17:25:00.047474 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" path="/var/lib/kubelet/pods/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f/volumes" Jan 30 17:25:00 crc kubenswrapper[4766]: I0130 17:25:00.048463 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" path="/var/lib/kubelet/pods/62fdc4f9-d560-48af-8de6-fecfb7e24d8b/volumes" Jan 30 17:25:06 crc kubenswrapper[4766]: I0130 17:25:06.042789 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:06 crc kubenswrapper[4766]: E0130 17:25:06.043354 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:17 crc kubenswrapper[4766]: I0130 17:25:17.040420 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:17 crc kubenswrapper[4766]: E0130 17:25:17.041115 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:31 crc kubenswrapper[4766]: I0130 17:25:31.039564 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:31 crc kubenswrapper[4766]: E0130 17:25:31.040287 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:43 crc kubenswrapper[4766]: I0130 17:25:43.040024 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:43 crc kubenswrapper[4766]: E0130 17:25:43.040733 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:55 crc kubenswrapper[4766]: I0130 17:25:55.039562 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:55 crc kubenswrapper[4766]: E0130 17:25:55.040753 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:06 crc kubenswrapper[4766]: I0130 17:26:06.042903 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:06 crc kubenswrapper[4766]: E0130 17:26:06.043728 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:21 crc kubenswrapper[4766]: I0130 17:26:21.045154 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:21 crc kubenswrapper[4766]: E0130 17:26:21.070584 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:33 crc kubenswrapper[4766]: I0130 17:26:33.039805 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:33 crc kubenswrapper[4766]: E0130 17:26:33.041051 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:45 crc kubenswrapper[4766]: I0130 17:26:45.039589 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:45 crc kubenswrapper[4766]: E0130 17:26:45.041707 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:56 crc kubenswrapper[4766]: I0130 17:26:56.043431 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:56 crc kubenswrapper[4766]: E0130 17:26:56.044285 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:09 crc kubenswrapper[4766]: I0130 17:27:09.040268 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:09 crc kubenswrapper[4766]: E0130 17:27:09.040959 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:23 crc kubenswrapper[4766]: I0130 17:27:23.039947 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:23 crc kubenswrapper[4766]: E0130 17:27:23.040970 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:35 crc kubenswrapper[4766]: I0130 17:27:35.039463 4766 scope.go:117] "RemoveContainer" 
containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:35 crc kubenswrapper[4766]: E0130 17:27:35.040222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:48 crc kubenswrapper[4766]: I0130 17:27:48.040993 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:48 crc kubenswrapper[4766]: I0130 17:27:48.709815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.819787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820728 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820746 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820765 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820787 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820796 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820807 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820816 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820838 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820846 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820860 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820868 4766 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820880 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820887 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820898 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820906 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820916 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821098 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821116 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821139 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.826012 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.830159 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993417 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993527 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095457 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.096117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.120298 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.151953 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.616161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.832841 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7" exitCode=0 Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.832962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7"} Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.833431 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"b029f3569a66e8a8f3f99f4d7fc08ed279dc99ad1ace20029a511e0ade65e8b6"} Jan 30 17:28:06 crc kubenswrapper[4766]: I0130 17:28:06.841224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b"} Jan 30 17:28:07 crc kubenswrapper[4766]: I0130 17:28:07.848998 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b" exitCode=0 Jan 30 17:28:07 crc kubenswrapper[4766]: I0130 17:28:07.849054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b"} Jan 30 17:28:08 crc kubenswrapper[4766]: I0130 17:28:08.861464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab"} Jan 30 17:28:08 crc kubenswrapper[4766]: I0130 17:28:08.887419 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pcvwt" podStartSLOduration=3.228158926 podStartE2EDuration="5.887398876s" podCreationTimestamp="2026-01-30 17:28:03 +0000 UTC" firstStartedPulling="2026-01-30 17:28:05.836219573 +0000 UTC m=+3940.474176919" lastFinishedPulling="2026-01-30 17:28:08.495459513 +0000 UTC m=+3943.133416869" observedRunningTime="2026-01-30 17:28:08.880870398 +0000 UTC m=+3943.518827764" watchObservedRunningTime="2026-01-30 17:28:08.887398876 +0000 UTC m=+3943.525356222" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.152556 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.153524 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.198165 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:15 crc kubenswrapper[4766]: I0130 17:28:15.231284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:17 crc kubenswrapper[4766]: I0130 17:28:17.604976 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:17 crc kubenswrapper[4766]: I0130 17:28:17.934983 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pcvwt" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" containerID="cri-o://9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" gracePeriod=2 Jan 30 17:28:18 crc kubenswrapper[4766]: I0130 17:28:18.951909 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" exitCode=0 Jan 30 17:28:18 crc kubenswrapper[4766]: I0130 17:28:18.951983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab"} Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.027702 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112380 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.113966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities" (OuterVolumeSpecName: "utilities") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.117859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv" (OuterVolumeSpecName: "kube-api-access-gsnqv") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "kube-api-access-gsnqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.161444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213835 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213869 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213885 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.962964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"b029f3569a66e8a8f3f99f4d7fc08ed279dc99ad1ace20029a511e0ade65e8b6"} Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.963038 4766 scope.go:117] "RemoveContainer" containerID="9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.963059 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.984175 4766 scope.go:117] "RemoveContainer" containerID="d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.994790 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.002525 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.026033 4766 scope.go:117] "RemoveContainer" containerID="ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7" Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.049807 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" path="/var/lib/kubelet/pods/6f04beb2-7aa4-4e60-acb5-943ec1b07978/volumes" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.169127 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172756 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-content" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-content" Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172805 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172828 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-utilities" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-utilities" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.173126 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.174214 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.176659 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.176671 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.177511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268673 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.369781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.370160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.370219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.371602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod 
\"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.378837 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.390819 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.498121 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.900672 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657561 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerID="2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee" exitCode=0 Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerDied","Data":"2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee"} Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerStarted","Data":"5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5"} Jan 30 17:30:02 crc kubenswrapper[4766]: I0130 17:30:02.903524 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.004961 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.005029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.005178 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.006106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.011332 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x" (OuterVolumeSpecName: "kube-api-access-v726x") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "kube-api-access-v726x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.011457 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106533 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106869 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106886 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerDied","Data":"5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5"} Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672786 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672793 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.979753 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.985972 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 17:30:04 crc kubenswrapper[4766]: I0130 17:30:04.051596 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" path="/var/lib/kubelet/pods/ae50e63c-8d14-4773-85f7-1deaaee40da6/volumes" Jan 30 17:30:09 crc kubenswrapper[4766]: I0130 17:30:09.045697 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:30:09 crc kubenswrapper[4766]: I0130 17:30:09.046302 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:30:30 crc kubenswrapper[4766]: I0130 17:30:30.824510 4766 scope.go:117] "RemoveContainer" containerID="8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2" Jan 30 17:30:39 crc kubenswrapper[4766]: I0130 17:30:39.045758 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 17:30:39 crc kubenswrapper[4766]: I0130 17:30:39.046362 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.045860 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.046542 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.046598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.047231 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.047287 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" gracePeriod=600 Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.158521 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" exitCode=0 Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.158597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.159107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.159131 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:33:09 crc kubenswrapper[4766]: I0130 17:33:09.045658 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:33:09 crc kubenswrapper[4766]: I0130 17:33:09.047389 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:33:39 crc kubenswrapper[4766]: I0130 17:33:39.045043 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:33:39 crc kubenswrapper[4766]: I0130 17:33:39.047024 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045439 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045931 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.046378 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.046423 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" gracePeriod=600 Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.453722 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" exitCode=0 Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.453796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.454265 4766 scope.go:117] "RemoveContainer" containerID="88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" Jan 30 17:34:09 crc kubenswrapper[4766]: E0130 17:34:09.835010 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:10 crc kubenswrapper[4766]: I0130 17:34:10.462905 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:10 crc kubenswrapper[4766]: E0130 17:34:10.463236 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:24 crc kubenswrapper[4766]: I0130 17:34:24.039858 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:24 crc kubenswrapper[4766]: E0130 17:34:24.040629 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:36 crc kubenswrapper[4766]: I0130 17:34:36.043270 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:36 crc kubenswrapper[4766]: E0130 17:34:36.044315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.121373 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: E0130 17:34:46.122834 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.122853 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.123047 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.124320 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.144605 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293915 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.396665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.396711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " 
pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.429282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.457813 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.680541 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.709997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"7779d57a2f7d39826992cc6bccf7eef3bb9b01a232008a9820c30f1fbd42f046"} Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.718923 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" exitCode=0 Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.718992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543"} Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.721899 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:34:48 crc kubenswrapper[4766]: I0130 17:34:48.735272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} Jan 30 17:34:49 crc kubenswrapper[4766]: I0130 17:34:49.747900 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" exitCode=0 Jan 30 17:34:49 crc kubenswrapper[4766]: I0130 17:34:49.748013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} Jan 30 17:34:50 crc kubenswrapper[4766]: I0130 17:34:50.759009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} Jan 30 17:34:50 crc kubenswrapper[4766]: I0130 17:34:50.785070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv58z" podStartSLOduration=2.245876919 podStartE2EDuration="4.785048822s" podCreationTimestamp="2026-01-30 17:34:46 +0000 UTC" firstStartedPulling="2026-01-30 17:34:47.721579755 +0000 UTC m=+4342.359537101" lastFinishedPulling="2026-01-30 17:34:50.260751658 +0000 UTC m=+4344.898709004" 
observedRunningTime="2026-01-30 17:34:50.779788447 +0000 UTC m=+4345.417745793" watchObservedRunningTime="2026-01-30 17:34:50.785048822 +0000 UTC m=+4345.423006158" Jan 30 17:34:51 crc kubenswrapper[4766]: I0130 17:34:51.039584 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:51 crc kubenswrapper[4766]: E0130 17:34:51.039771 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.458799 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.459299 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.504819 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.838542 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.892632 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:58 crc kubenswrapper[4766]: I0130 17:34:58.814212 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv58z" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server" containerID="cri-o://2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" gracePeriod=2 Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.264608 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293619 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.294782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities" (OuterVolumeSpecName: "utilities") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.300384 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf" (OuterVolumeSpecName: "kube-api-access-kgpwf") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "kube-api-access-kgpwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.395993 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.396031 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.412464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.496733 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834316 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" exitCode=0 Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"7779d57a2f7d39826992cc6bccf7eef3bb9b01a232008a9820c30f1fbd42f046"} Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834411 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834422 4766 scope.go:117] "RemoveContainer" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.853227 4766 scope.go:117] "RemoveContainer" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.863897 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.870641 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.898582 4766 scope.go:117] "RemoveContainer" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.917268 4766 scope.go:117] "RemoveContainer" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.917818 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": container with ID starting with 2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3 not found: ID does not exist" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.917864 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} err="failed to get container status \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": rpc error: code = NotFound desc = could not find container \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": container with ID starting with 2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3 not found: ID does not exist" Jan 30 17:35:00 crc 
kubenswrapper[4766]: I0130 17:35:00.917900 4766 scope.go:117] "RemoveContainer" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.918404 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": container with ID starting with b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5 not found: ID does not exist" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918431 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} err="failed to get container status \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": rpc error: code = NotFound desc = could not find container \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": container with ID starting with b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5 not found: ID does not exist" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918450 4766 scope.go:117] "RemoveContainer" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.918751 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": container with ID starting with 3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543 not found: ID does not exist" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918791 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543"} err="failed to get container status \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": rpc error: code = NotFound desc = could not find container \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": container with ID starting with 3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543 not found: ID does not exist" Jan 30 17:35:02 crc kubenswrapper[4766]: I0130 17:35:02.047730 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" path="/var/lib/kubelet/pods/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489/volumes" Jan 30 17:35:05 crc kubenswrapper[4766]: I0130 17:35:05.039720 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:05 crc kubenswrapper[4766]: E0130 17:35:05.039964 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.070885 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 
Jan 30 17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071316 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-content"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071335 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-content"
Jan 30 17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071363 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-utilities"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071371 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-utilities"
Jan 30 17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071394 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071401 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071572 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.072772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073472 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073708 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s"
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.086623 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"]
Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s"
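Note: pod creation follows the reconciler's fixed volume sequence visible above: one VerifyControllerAttachedVolume line per declared volume, then MountVolume once the sandbox is on its way (the RemoveStaleState E-lines before it are only the CPU/memory managers clearing state left by the just-deleted pod). Since these are structured klog lines, the volume name, plugin path, and pod can be scraped with a small regexp. A best-effort stdlib sketch for this log's quoting style, not a general klog parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// volRe pulls the volume name, its UniqueName plugin path, and the pod out
// of the reconciler_common.go lines above (escaped \" quoting as logged).
var volRe = regexp.MustCompile(`volume \\"([^"\\]+)\\" \(UniqueName: \\"([^"\\]+)\\"\).* pod="([^"]+)"`)

func main() {
	line := `I0130 17:35:06.073617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s"`
	if m := volRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("volume=%s\nunique=%s\npod=%s\n", m[1], m[2], m[3])
	}
}
```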
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.175028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.198886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.395871 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.692403 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.874529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"b12c1b39c79da2097cf82447c715692db38883222baa6093ec2dc5ab0047733d"} Jan 30 17:35:07 crc kubenswrapper[4766]: I0130 17:35:07.882233 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" exitCode=0 Jan 30 17:35:07 crc kubenswrapper[4766]: I0130 17:35:07.882277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70"} Jan 30 17:35:08 crc kubenswrapper[4766]: I0130 17:35:08.891306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} Jan 30 17:35:09 crc kubenswrapper[4766]: I0130 17:35:09.898662 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" exitCode=0 Jan 30 17:35:09 crc kubenswrapper[4766]: I0130 17:35:09.898907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} Jan 30 17:35:10 crc kubenswrapper[4766]: I0130 17:35:10.908251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} Jan 30 17:35:10 crc kubenswrapper[4766]: I0130 17:35:10.931162 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wb7s" podStartSLOduration=2.451162638 podStartE2EDuration="4.931138148s" podCreationTimestamp="2026-01-30 17:35:06 +0000 UTC" firstStartedPulling="2026-01-30 17:35:07.884072653 +0000 UTC m=+4362.522029999" lastFinishedPulling="2026-01-30 17:35:10.364048163 +0000 UTC m=+4365.002005509" observedRunningTime="2026-01-30 17:35:10.925715679 +0000 UTC m=+4365.563673025" watchObservedRunningTime="2026-01-30 17:35:10.931138148 +0000 UTC m=+4365.569095494" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.397203 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.397562 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.444460 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.997425 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:17 crc kubenswrapper[4766]: I0130 17:35:17.039598 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:17 crc kubenswrapper[4766]: E0130 17:35:17.040132 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:17 crc kubenswrapper[4766]: I0130 17:35:17.317236 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:18 crc kubenswrapper[4766]: I0130 17:35:18.971616 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wb7s" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" containerID="cri-o://f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" gracePeriod=2 Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.844483 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.981134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982310 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities" (OuterVolumeSpecName: "utilities") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982407 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982449 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982700 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989199 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" exitCode=0 Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"b12c1b39c79da2097cf82447c715692db38883222baa6093ec2dc5ab0047733d"} Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989396 4766 scope.go:117] "RemoveContainer" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989413 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.991260 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw" (OuterVolumeSpecName: "kube-api-access-qtbmw") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "kube-api-access-qtbmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.032469 4766 scope.go:117] "RemoveContainer" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.053598 4766 scope.go:117] "RemoveContainer" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.084622 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.087666 4766 scope.go:117] "RemoveContainer" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.088196 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": container with ID starting with f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed not found: ID does not exist" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088265 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} err="failed to get container status \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": rpc error: code = NotFound desc = could not find container \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": container with ID starting with f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088305 4766 scope.go:117] "RemoveContainer" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.088673 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": container with ID starting with 69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956 not found: ID does not exist" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088720 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} err="failed to get container status \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": rpc error: code = NotFound desc = could not find container \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": container with ID starting with 69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956 not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088756 4766 scope.go:117] "RemoveContainer" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.089173 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": container with ID starting with 8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70 not found: ID does not exist" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.089282 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70"} err="failed to get container status \"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": rpc error: code = NotFound desc = could not find container \"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": container with ID starting with 8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70 not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.111805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.186132 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.324909 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.330308 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:22 crc kubenswrapper[4766]: I0130 17:35:22.068241 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" path="/var/lib/kubelet/pods/94bf4dd2-3bf6-4429-a387-5cc19fadf159/volumes" Jan 30 17:35:29 crc kubenswrapper[4766]: I0130 17:35:29.039286 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:29 crc kubenswrapper[4766]: E0130 17:35:29.040037 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.461748 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462601 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-utilities" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462616 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-utilities" Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462624 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462630 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462651 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-content" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462658 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-content" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462809 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.463853 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.469533 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627498 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod 
\"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.730456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.730851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.752379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.791359 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:36 crc kubenswrapper[4766]: I0130 17:35:36.057847 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:36 crc kubenswrapper[4766]: I0130 17:35:36.095073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"01b91be4a1ae2b19d40d81b37a1373588971cae934410131587b526a172a37bb"} Jan 30 17:35:37 crc kubenswrapper[4766]: I0130 17:35:37.102971 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" exitCode=0 Jan 30 17:35:37 crc kubenswrapper[4766]: I0130 17:35:37.103152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7"} Jan 30 17:35:38 crc kubenswrapper[4766]: I0130 17:35:38.110211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} Jan 30 17:35:39 crc kubenswrapper[4766]: I0130 17:35:39.119025 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" exitCode=0 Jan 30 17:35:39 crc kubenswrapper[4766]: I0130 17:35:39.119069 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} Jan 30 17:35:40 crc kubenswrapper[4766]: I0130 17:35:40.128897 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} Jan 30 17:35:40 crc kubenswrapper[4766]: I0130 17:35:40.153535 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hjhvv" podStartSLOduration=2.742026708 podStartE2EDuration="5.153515903s" podCreationTimestamp="2026-01-30 17:35:35 +0000 UTC" firstStartedPulling="2026-01-30 17:35:37.105604993 +0000 UTC m=+4391.743562339" lastFinishedPulling="2026-01-30 17:35:39.517094188 +0000 UTC m=+4394.155051534" observedRunningTime="2026-01-30 17:35:40.149209154 +0000 UTC m=+4394.787166520" watchObservedRunningTime="2026-01-30 17:35:40.153515903 +0000 UTC m=+4394.791473249" Jan 30 17:35:43 crc kubenswrapper[4766]: I0130 17:35:43.039789 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:43 crc kubenswrapper[4766]: E0130 17:35:43.040546 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.791875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.792217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.840209 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:46 crc kubenswrapper[4766]: I0130 17:35:46.200413 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:46 crc kubenswrapper[4766]: I0130 17:35:46.243424 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.179677 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hjhvv" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" containerID="cri-o://ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" gracePeriod=2 Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.570067 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.730797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.730878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.731034 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.732137 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities" (OuterVolumeSpecName: "utilities") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.736749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt" (OuterVolumeSpecName: "kube-api-access-8zktt") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "kube-api-access-8zktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.756548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833050 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833096 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833107 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187700 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" exitCode=0 Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"01b91be4a1ae2b19d40d81b37a1373588971cae934410131587b526a172a37bb"} Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187774 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187795 4766 scope.go:117] "RemoveContainer" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.202791 4766 scope.go:117] "RemoveContainer" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.219095 4766 scope.go:117] "RemoveContainer" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.224376 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.232711 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.246961 4766 scope.go:117] "RemoveContainer" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.247605 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": container with ID starting with ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054 not found: ID does not exist" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.247725 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} err="failed to get container status \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": rpc error: code = NotFound desc = could not find container \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": container with ID starting with ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054 not found: ID does not exist" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.247753 4766 scope.go:117] "RemoveContainer" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.248148 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": container with ID starting with d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b not found: ID does not exist" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248200 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} err="failed to get container status \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": rpc error: code = NotFound desc = could not find container \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": container with ID starting with d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b not found: ID does not exist" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248226 4766 scope.go:117] "RemoveContainer" 
containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.248561 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": container with ID starting with 74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7 not found: ID does not exist" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248588 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7"} err="failed to get container status \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": rpc error: code = NotFound desc = could not find container \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": container with ID starting with 74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7 not found: ID does not exist" Jan 30 17:35:50 crc kubenswrapper[4766]: I0130 17:35:50.048665 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575b9005-6dc0-455d-8097-a165628fd850" path="/var/lib/kubelet/pods/575b9005-6dc0-455d-8097-a165628fd850/volumes" Jan 30 17:35:55 crc kubenswrapper[4766]: I0130 17:35:55.039233 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:55 crc kubenswrapper[4766]: E0130 17:35:55.040935 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:09 crc kubenswrapper[4766]: I0130 17:36:09.039800 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:09 crc kubenswrapper[4766]: E0130 17:36:09.040544 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:24 crc kubenswrapper[4766]: I0130 17:36:24.039499 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:24 crc kubenswrapper[4766]: E0130 17:36:24.040289 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:35 crc kubenswrapper[4766]: I0130 17:36:35.040032 4766 scope.go:117] "RemoveContainer" 
containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:35 crc kubenswrapper[4766]: E0130 17:36:35.040758 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:47 crc kubenswrapper[4766]: I0130 17:36:47.040081 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:47 crc kubenswrapper[4766]: E0130 17:36:47.041058 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:01 crc kubenswrapper[4766]: I0130 17:37:01.040294 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:01 crc kubenswrapper[4766]: E0130 17:37:01.041104 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:15 crc kubenswrapper[4766]: I0130 17:37:15.039321 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:15 crc kubenswrapper[4766]: E0130 17:37:15.040048 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:17 crc kubenswrapper[4766]: I0130 17:37:17.884239 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 17:37:17 crc kubenswrapper[4766]: I0130 17:37:17.890482 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.016779 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017060 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-content" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017094 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-content" Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017107 4766 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-utilities" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017115 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-utilities" Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017132 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017137 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017292 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017802 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.020045 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021235 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021267 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.029108 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.059401 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" path="/var/lib/kubelet/pods/3ad5692e-34c5-4e32-ba96-cd5e6e617c62/volumes" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.122473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.123168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.123383 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225773 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.258422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.347593 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.816940 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817382 4766 generic.go:334] "Generic (PLEG): container finished" podID="b1d0287e-07c6-4924-85de-701d0ff03488" containerID="3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262" exitCode=0 Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817477 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerDied","Data":"3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262"} Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817731 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerStarted","Data":"c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a"} Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.145364 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.271622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272589 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.279931 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8" (OuterVolumeSpecName: "kube-api-access-4qzm8") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "kube-api-access-4qzm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.298511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.374333 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.374600 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerDied","Data":"c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a"} Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831920 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a" Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831648 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.281650 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.286856 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.416955 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"] Jan 30 17:37:23 crc kubenswrapper[4766]: E0130 17:37:23.417488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.417526 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.417745 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.418733 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421818 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421845 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421880 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.423491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.431732 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"] Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705496 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705845 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " 
pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.706288 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.734285 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.746690 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.049392 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" path="/var/lib/kubelet/pods/b1d0287e-07c6-4924-85de-701d0ff03488/volumes" Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.247266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"] Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861229 4766 generic.go:334] "Generic (PLEG): container finished" podID="554cf476-6d37-432b-826d-9a1094b73f78" containerID="bc2fc12d9fb98dc06beb3fcccccec9dd09eda88527c87c3e7ef793da23ffc25f" exitCode=0 Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerDied","Data":"bc2fc12d9fb98dc06beb3fcccccec9dd09eda88527c87c3e7ef793da23ffc25f"} Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerStarted","Data":"e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6"} Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.138791 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.257741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258891 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.263036 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf" (OuterVolumeSpecName: "kube-api-access-n2xxf") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "kube-api-access-n2xxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.275320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.360908 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.361219 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerDied","Data":"e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6"} Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878803 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878977 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6" Jan 30 17:37:27 crc kubenswrapper[4766]: I0130 17:37:27.040348 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:27 crc kubenswrapper[4766]: E0130 17:37:27.040775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:31 crc kubenswrapper[4766]: I0130 17:37:31.225311 4766 scope.go:117] "RemoveContainer" containerID="403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304" Jan 30 17:37:38 crc kubenswrapper[4766]: I0130 17:37:38.040363 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:38 crc kubenswrapper[4766]: E0130 17:37:38.040908 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:51 crc kubenswrapper[4766]: I0130 17:37:51.040036 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:51 crc kubenswrapper[4766]: E0130 17:37:51.040920 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:06 crc kubenswrapper[4766]: I0130 
17:38:06.043528 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:38:06 crc kubenswrapper[4766]: E0130 17:38:06.044373 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:17 crc kubenswrapper[4766]: I0130 17:38:17.040454 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:38:17 crc kubenswrapper[4766]: E0130 17:38:17.041481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:32 crc kubenswrapper[4766]: I0130 17:38:32.040114 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:38:32 crc kubenswrapper[4766]: E0130 17:38:32.040845 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:43 crc kubenswrapper[4766]: I0130 17:38:43.038974 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:38:43 crc kubenswrapper[4766]: E0130 17:38:43.039709 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.844073 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"] Jan 30 17:38:45 crc kubenswrapper[4766]: E0130 17:38:45.846558 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.846574 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.846713 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.848590 4766 util.go:30] "No sandbox for pod can be found. 
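The alternating RemoveContainer / "Error syncing pod" pairs above are not the backoff schedule itself: each periodic pod sync retries StartContainer and is refused while the restart backoff window is open. The message names the cap, "back-off 5m0s"; kubelet's container restart backoff doubles from a 10 s base up to that 5-minute ceiling, which is why machine-config-daemon only comes back at 17:39:22 further down. A sketch of the capped doubling (base and cap per kubelet defaults; the loop is illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // kubelet's initial container restart delay
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" cap in the records above
	)
	delay := base
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("after crash %d: wait %v before StartContainer\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // CrashLoopBackOff settles at the cap
		}
	}
}
```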
Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.854496 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"] Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977204 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079004 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079100 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.080037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.080604 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.107104 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.175578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.655699 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"] Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.463770 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d" exitCode=0 Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.463878 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"} Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.464080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerStarted","Data":"67b5cb77ed90276d3ad55c1d03d0e6cb3bcec17521689084489af36ee219355e"} Jan 30 17:38:49 crc kubenswrapper[4766]: I0130 17:38:49.483623 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45" exitCode=0 Jan 30 17:38:49 crc kubenswrapper[4766]: I0130 17:38:49.483718 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"} Jan 30 17:38:50 crc kubenswrapper[4766]: I0130 17:38:50.494067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerStarted","Data":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"} Jan 30 17:38:50 crc kubenswrapper[4766]: I0130 17:38:50.516375 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6jjm" podStartSLOduration=3.059169837 podStartE2EDuration="5.516352827s" podCreationTimestamp="2026-01-30 17:38:45 +0000 UTC" firstStartedPulling="2026-01-30 17:38:47.466258705 +0000 UTC m=+4582.104216051" lastFinishedPulling="2026-01-30 17:38:49.923441695 +0000 UTC m=+4584.561399041" observedRunningTime="2026-01-30 17:38:50.512684407 +0000 UTC m=+4585.150641773" watchObservedRunningTime="2026-01-30 17:38:50.516352827 +0000 UTC m=+4585.154310173" Jan 30 17:38:55 crc kubenswrapper[4766]: I0130 17:38:55.039649 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:38:55 crc kubenswrapper[4766]: E0130 17:38:55.040547 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.175864 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.175939 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.222124 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.581147 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.624962 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:58 crc kubenswrapper[4766]: I0130 17:38:58.548646 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6jjm" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" containerID="cri-o://19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" gracePeriod=2
Jan 30 17:38:58 crc kubenswrapper[4766]: I0130 17:38:58.918124 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064168 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064271 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064355 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.065418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities" (OuterVolumeSpecName: "utilities") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
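The four SyncLoop (probe) records a few lines up show the gate ordering for certified-operators-c6jjm: while the startup probe is unhealthy the readiness result stays empty, and readiness only flips to ready after startup reports started, since kubelet withholds the other probes until the startup probe has succeeded. A hypothetical probe pair illustrating that ordering; the port and thresholds are invented, not read from the real catalog pod:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Hypothetical container spec: readiness probing is suppressed until the
// startup probe passes, matching the status="" -> status="ready" sequence.
var registryServer = corev1.Container{
	Name: "registry-server",
	StartupProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)}, // port assumed
		},
		PeriodSeconds:    10,
		FailureThreshold: 30,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
		},
		PeriodSeconds: 10,
	},
}

func main() { _ = registryServer }
```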
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.069356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5" (OuterVolumeSpecName: "kube-api-access-2dhc5") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "kube-api-access-2dhc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.118926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166395 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166456 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") on node \"crc\" DevicePath \"\"" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166471 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556882 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" exitCode=0 Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556928 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"} Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.557050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"67b5cb77ed90276d3ad55c1d03d0e6cb3bcec17521689084489af36ee219355e"} Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.557071 4766 scope.go:117] "RemoveContainer" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.579146 4766 scope.go:117] "RemoveContainer" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.595358 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"] Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.601624 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"] Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.816093 4766 scope.go:117] "RemoveContainer" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.930837 4766 scope.go:117] "RemoveContainer" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" Jan 30 17:38:59 crc kubenswrapper[4766]: E0130 17:38:59.931482 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": container with ID starting with 19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91 not found: ID does not exist" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.931535 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"} err="failed to get container status \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": rpc error: code = NotFound desc = could not find container \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": container with ID starting with 19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91 not found: ID does not exist" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.931562 4766 scope.go:117] "RemoveContainer" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45" Jan 30 17:38:59 crc kubenswrapper[4766]: E0130 17:38:59.932053 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45\": container with ID starting with 0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45 not found: ID does not exist" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45" Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.932131 4766 
Jan 30 17:39:00 crc kubenswrapper[4766]: I0130 17:39:00.051709 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" path="/var/lib/kubelet/pods/5b2a422f-876d-4faa-9195-7dabd362b052/volumes"
Jan 30 17:39:08 crc kubenswrapper[4766]: I0130 17:39:08.039617 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:39:08 crc kubenswrapper[4766]: E0130 17:39:08.040400 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:39:22 crc kubenswrapper[4766]: I0130 17:39:22.039496 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:39:22 crc kubenswrapper[4766]: I0130 17:39:22.701706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"}
Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.012958 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014152 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-content"
containerName="extract-content" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014168 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-content" Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014286 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-utilities" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014295 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-utilities" Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014309 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014316 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014478 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.015326 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.018635 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.018910 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019058 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019499 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gwzhk" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.029026 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.093831 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.094112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.094344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " 
pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195505 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.196464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.196500 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.217825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.294397 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.296356 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.328647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.386669 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.398763 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.399154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.399281 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.501445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.501687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.523290 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc 
Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.625899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm"
Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.905280 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:45 crc kubenswrapper[4766]: W0130 17:40:45.916585 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2586ecd_ab78_47e4_931c_d0a872a4a404.slice/crio-5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738 WatchSource:0}: Error finding container 5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738: Status 404 returned error can't find the container with id 5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738
Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.920077 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"]
Jan 30 17:40:45 crc kubenswrapper[4766]: W0130 17:40:45.930828 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ec757c0_9d3d_4d66_9cd8_742105f2c48e.slice/crio-93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769 WatchSource:0}: Error finding container 93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769: Status 404 returned error can't find the container with id 93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769
Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.179881 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.181360 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184448 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184462 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184744 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.185215 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.185529 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7fqzb" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.195471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.215923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216549 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216673 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217216 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217320 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.218356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261298 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerID="987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5" exitCode=0 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261432 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerStarted","Data":"93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263508 4766 generic.go:334] "Generic (PLEG): container finished" podID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584" exitCode=0 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerStarted","Data":"5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319887 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319946 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320018 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320094 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.322118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.322884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.323157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.323242 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.324973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.325632 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.325662 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ee0fb1f36d21ee32de31c2c1b35f1f2033c96e9c0c8d1603b6b408ac3d6223f/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.326481 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.328630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.340013 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.379547 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.453436 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.512787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.514219 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520241 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520644 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bz89s" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.557400 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.571480 4766 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 17:40:46 crc kubenswrapper[4766]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 17:40:46 crc kubenswrapper[4766]: > podSandboxID="5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738" Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.571639 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 17:40:46 crc kubenswrapper[4766]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mndkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d7b5456f5-kl2j6_openstack(d2586ecd-ab78-47e4-931c-d0a872a4a404): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 17:40:46 crc kubenswrapper[4766]: > logger="UnhandledError" Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.573163 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.623976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624026 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624054 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc 
kubenswrapper[4766]: I0130 17:40:46.725831 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725875 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725940 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726009 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726466 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.727088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.727602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.728402 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.728430 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a67480c1d51246343d54cce22ecd2529a760cf02f3b5a31cca902016f15d50c3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.731728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.731846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.733113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.747655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.754344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.902219 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: W0130 17:40:46.905009 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8a2d07_a10c_454f_b5f0_d5fb399de3dc.slice/crio-b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238 WatchSource:0}: Error finding container b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238: Status 404 returned error can't find the container with id b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.905624 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.943477 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.947108 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.953849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.960849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.961839 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.963064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gl758" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.973654 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.974143 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129875 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130137 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.228557 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.229711 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231068 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231138 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231348 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233553 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233740 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.234979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.238506 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.239287 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.239317 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20b0f9adc7994a8fd90688a9d6ad7010a4d3c43b63679c705cf315abd13682e6/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.240675 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.244786 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.250717 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kvd5j" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.251415 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.272320 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.276503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerStarted","Data":"a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb"} Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.276591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.279665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238"} Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334265 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.346646 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" podStartSLOduration=2.346628006 podStartE2EDuration="2.346628006s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:47.335242285 +0000 UTC m=+4701.973199631" watchObservedRunningTime="2026-01-30 17:40:47.346628006 +0000 UTC m=+4701.984585342" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.420372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436467 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436522 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.438721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.439225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.471310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.481872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.548566 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.571823 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.037057 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: W0130 17:40:48.044245 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57e546c4_803f_4379_b5fb_de5ec7f0c79f.slice/crio-e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7 WatchSource:0}: Error finding container e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7: Status 404 returned error can't find the container with id e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7 Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.075787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: W0130 17:40:48.081585 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08baa9d0_2942_4a73_a75a_d13dc2148bb0.slice/crio-c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec WatchSource:0}: Error finding container c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec: Status 404 returned error can't find the container with id c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.287269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"a6bfba4c8f09a9b72e6500c3cd5b8a4d9dd328a59974eb580780494c99cc6fcc"} Jan 30 17:40:48 crc 
kubenswrapper[4766]: I0130 17:40:48.288856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"08baa9d0-2942-4a73-a75a-d13dc2148bb0","Type":"ContainerStarted","Data":"3f5c0de07e7479d50cce8d395f10ab302ea61264980440c7d83b992af8af828d"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.288900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"08baa9d0-2942-4a73-a75a-d13dc2148bb0","Type":"ContainerStarted","Data":"c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.289001 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.290680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.292953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerStarted","Data":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.293457 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.295710 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.295734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.325837 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.325815782 podStartE2EDuration="1.325815782s" podCreationTimestamp="2026-01-30 17:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:48.319058127 +0000 UTC m=+4702.957015473" watchObservedRunningTime="2026-01-30 17:40:48.325815782 +0000 UTC m=+4702.963773128" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.384496 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podStartSLOduration=4.384474685 podStartE2EDuration="4.384474685s" podCreationTimestamp="2026-01-30 17:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:48.382723376 +0000 UTC m=+4703.020680722" watchObservedRunningTime="2026-01-30 17:40:48.384474685 +0000 UTC m=+4703.022432031" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.609841 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.612792 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.616719 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lqsbw" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617271 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617713 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.623305 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.759800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760089 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760269 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760304 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862310 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862864 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863387 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866557 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866590 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce7edaffb00e2bd79e7a1aa3a5ee9c0ee7a7f7940e757f6576a1ec1da2cd53f3/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866951 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.868394 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.893114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.897546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.933519 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:49 crc kubenswrapper[4766]: I0130 17:40:49.302827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c"} Jan 30 17:40:49 crc kubenswrapper[4766]: I0130 17:40:49.387700 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:40:49 crc kubenswrapper[4766]: W0130 17:40:49.388506 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c586850_0ed6_4949_9087_0e66405455ce.slice/crio-dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14 WatchSource:0}: Error finding container dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14: Status 404 returned error can't find the container with id dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14 Jan 30 17:40:50 crc kubenswrapper[4766]: I0130 17:40:50.316300 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1"} Jan 30 17:40:50 crc kubenswrapper[4766]: I0130 17:40:50.316797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14"} Jan 30 17:40:52 crc kubenswrapper[4766]: I0130 17:40:52.331428 4766 generic.go:334] "Generic (PLEG): container finished" podID="57e546c4-803f-4379-b5fb-de5ec7f0c79f" containerID="3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6" exitCode=0 Jan 30 17:40:52 crc kubenswrapper[4766]: I0130 17:40:52.331507 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerDied","Data":"3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6"} Jan 30 17:40:53 crc kubenswrapper[4766]: I0130 17:40:53.340388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"45984f6374a5f85fca8559d6af13242174c7dbe17d36d867af8a33da7b1e938e"} Jan 30 17:40:53 crc kubenswrapper[4766]: I0130 17:40:53.358777 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.358756543 podStartE2EDuration="8.358756543s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:53.357960831 +0000 UTC m=+4707.995918177" watchObservedRunningTime="2026-01-30 17:40:53.358756543 +0000 UTC m=+4707.996713889" Jan 30 17:40:54 crc kubenswrapper[4766]: I0130 17:40:54.350043 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c586850-0ed6-4949-9087-0e66405455ce" containerID="ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1" exitCode=0 Jan 30 17:40:54 crc kubenswrapper[4766]: I0130 17:40:54.350345 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerDied","Data":"ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1"} Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.357270 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"ec1fc88488de61b67d6907b2c45b40de972cade7708c22780797640de9ebe4c4"} Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.384700 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.38467738 podStartE2EDuration="8.38467738s" podCreationTimestamp="2026-01-30 17:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:55.378141381 +0000 UTC m=+4710.016098747" watchObservedRunningTime="2026-01-30 17:40:55.38467738 +0000 UTC m=+4710.022634736" Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.389217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.628510 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.676360 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.364676 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns" containerID="cri-o://50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" gracePeriod=10 Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.788752 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889626 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889738 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.900986 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq" (OuterVolumeSpecName: "kube-api-access-mndkq") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "kube-api-access-mndkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.931961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.933706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config" (OuterVolumeSpecName: "config") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991301 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991357 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991368 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373065 4766 generic.go:334] "Generic (PLEG): container finished" podID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" exitCode=0 Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373127 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"} Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738"} Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373275 4766 scope.go:117] "RemoveContainer" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.391720 4766 scope.go:117] "RemoveContainer" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.406532 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.412453 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.429712 4766 scope.go:117] "RemoveContainer" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" Jan 30 17:40:57 crc kubenswrapper[4766]: E0130 17:40:57.430355 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": container with ID starting with 50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae not found: ID does not exist" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.430399 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"} err="failed to get container status \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": rpc error: code = NotFound desc = could not find container \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": container with ID starting with 50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae not found: ID does not exist" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.430429 4766 scope.go:117] "RemoveContainer" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584" Jan 30 17:40:57 crc kubenswrapper[4766]: E0130 17:40:57.431050 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": container with ID starting with 836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584 not found: ID does not exist" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.431103 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"} err="failed to get container status 
\"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": rpc error: code = NotFound desc = could not find container \"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": container with ID starting with 836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584 not found: ID does not exist" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.550412 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.573530 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.574620 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.647501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.051372 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" path="/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volumes" Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.454807 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.933642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.933713 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:59 crc kubenswrapper[4766]: I0130 17:40:59.003508 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:59 crc kubenswrapper[4766]: I0130 17:40:59.454151 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.902846 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2zjtv"] Jan 30 17:41:05 crc kubenswrapper[4766]: E0130 17:41:05.903648 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903663 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns" Jan 30 17:41:05 crc kubenswrapper[4766]: E0130 17:41:05.903690 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="init" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903697 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="init" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903831 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.904411 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.908298 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.911268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2zjtv"] Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.027779 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.028223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.129443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.129567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.130419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.160703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.227985 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.705652 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2zjtv"] Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.465581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerStarted","Data":"0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"} Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.465940 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerStarted","Data":"0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778"} Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.485923 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2zjtv" podStartSLOduration=2.485898126 podStartE2EDuration="2.485898126s" podCreationTimestamp="2026-01-30 17:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:07.480098658 +0000 UTC m=+4722.118056004" watchObservedRunningTime="2026-01-30 17:41:07.485898126 +0000 UTC m=+4722.123855472" Jan 30 17:41:08 crc kubenswrapper[4766]: I0130 17:41:08.474074 4766 generic.go:334] "Generic (PLEG): container finished" podID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerID="0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4" exitCode=0 Jan 30 17:41:08 crc kubenswrapper[4766]: I0130 17:41:08.474135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerDied","Data":"0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"} Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.753487 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.887336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"7304e777-31df-44d9-932a-e9dfde1ebad9\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.887910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"7304e777-31df-44d9-932a-e9dfde1ebad9\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.888564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7304e777-31df-44d9-932a-e9dfde1ebad9" (UID: "7304e777-31df-44d9-932a-e9dfde1ebad9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.889367 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.893462 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd" (OuterVolumeSpecName: "kube-api-access-rnhqd") pod "7304e777-31df-44d9-932a-e9dfde1ebad9" (UID: "7304e777-31df-44d9-932a-e9dfde1ebad9"). InnerVolumeSpecName "kube-api-access-rnhqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.989869 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerDied","Data":"0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778"} Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487345 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778" Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487397 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv" Jan 30 17:41:12 crc kubenswrapper[4766]: I0130 17:41:12.462806 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2zjtv"] Jan 30 17:41:12 crc kubenswrapper[4766]: I0130 17:41:12.470405 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2zjtv"] Jan 30 17:41:14 crc kubenswrapper[4766]: I0130 17:41:14.051528 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" path="/var/lib/kubelet/pods/7304e777-31df-44d9-932a-e9dfde1ebad9/volumes" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.491699 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xfq5b"] Jan 30 17:41:17 crc kubenswrapper[4766]: E0130 17:41:17.492524 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.492541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.492747 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.493397 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.496305 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.497504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xfq5b"] Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.515086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.515429 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.617375 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.617875 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.619168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.640237 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.817295 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.263799 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xfq5b"] Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.538677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerStarted","Data":"a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc"} Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.538730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerStarted","Data":"3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3"} Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.555445 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xfq5b" podStartSLOduration=1.555423694 podStartE2EDuration="1.555423694s" podCreationTimestamp="2026-01-30 17:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:18.551087085 +0000 UTC m=+4733.189044431" watchObservedRunningTime="2026-01-30 17:41:18.555423694 +0000 UTC m=+4733.193381040" Jan 30 17:41:19 crc kubenswrapper[4766]: I0130 17:41:19.548110 4766 generic.go:334] "Generic (PLEG): container finished" podID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerID="a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc" exitCode=0 Jan 30 17:41:19 crc kubenswrapper[4766]: I0130 17:41:19.548324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerDied","Data":"a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc"} Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.556912 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerID="b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7" exitCode=0 Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.557100 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7"} Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.924929 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.061309 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.061429 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.062124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e74a4a8-0c9c-4bba-b839-4caeca1e9304" (UID: "0e74a4a8-0c9c-4bba-b839-4caeca1e9304"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.066614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f" (OuterVolumeSpecName: "kube-api-access-vp89f") pod "0e74a4a8-0c9c-4bba-b839-4caeca1e9304" (UID: "0e74a4a8-0c9c-4bba-b839-4caeca1e9304"). InnerVolumeSpecName "kube-api-access-vp89f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.162689 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.162736 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.569799 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerID="5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c" exitCode=0 Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.569955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c"} Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.573020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24"} Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.573364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" 
event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerDied","Data":"3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3"} Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574765 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574835 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xfq5b" Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.654540 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.654522175 podStartE2EDuration="36.654522175s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:21.648629764 +0000 UTC m=+4736.286587110" watchObservedRunningTime="2026-01-30 17:41:21.654522175 +0000 UTC m=+4736.292479521" Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.584519 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae"} Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.584975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.606140 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.606118357 podStartE2EDuration="37.606118357s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:22.604271466 +0000 UTC m=+4737.242228822" watchObservedRunningTime="2026-01-30 17:41:22.606118357 +0000 UTC m=+4737.244075703" Jan 30 17:41:36 crc kubenswrapper[4766]: I0130 17:41:36.456657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 17:41:36 crc kubenswrapper[4766]: I0130 17:41:36.910116 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.045604 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.045995 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.100795 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:41:39 crc kubenswrapper[4766]: E0130 17:41:39.101344 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.101369 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.101734 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.103041 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.118659 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.347108 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " 
pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.347280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.378348 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.443686 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.715594 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.762236 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:41:39 crc kubenswrapper[4766]: W0130 17:41:39.770728 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83b52c39_5b23_4e74_abf9_0018a54b215e.slice/crio-413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199 WatchSource:0}: Error finding container 413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199: Status 404 returned error can't find the container with id 413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199 Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.463411 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729620 4766 generic.go:334] "Generic (PLEG): container finished" podID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" exitCode=0 Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89"} Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerStarted","Data":"413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199"} Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.576023 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" containerID="cri-o://90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24" gracePeriod=604799 Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.737733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerStarted","Data":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"} 
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.737862 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.757899 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" podStartSLOduration=2.757882505 podStartE2EDuration="2.757882505s" podCreationTimestamp="2026-01-30 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:41.753674981 +0000 UTC m=+4756.391632327" watchObservedRunningTime="2026-01-30 17:41:41.757882505 +0000 UTC m=+4756.395839851"
Jan 30 17:41:42 crc kubenswrapper[4766]: I0130 17:41:42.180292 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" containerID="cri-o://afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae" gracePeriod=604799
Jan 30 17:41:46 crc kubenswrapper[4766]: I0130 17:41:46.454338 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5672: connect: connection refused"
Jan 30 17:41:46 crc kubenswrapper[4766]: I0130 17:41:46.907117 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.240:5672: connect: connection refused"
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.786606 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerID="90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24" exitCode=0
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.786693 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24"}
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.790475 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerID="afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae" exitCode=0
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.790524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae"}
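The "dial tcp ...:5672: connect: connection refused" readiness failures are expected during the graceful shutdown: the broker has closed its AMQP listener, so the TCP probe gets a refusal until the container exits. A TCP-socket readiness probe reduces to a dial with a timeout:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // tcpProbe mimics a TCP readiness probe: success iff the port accepts a connection.
    func tcpProbe(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return err // e.g. "connect: connection refused" while the broker shuts down
    	}
    	return conn.Close()
    }

    func main() {
    	if err := tcpProbe("10.217.0.239:5672", time.Second); err != nil {
    		fmt.Println("probe failed:", err)
    	}
    }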
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098426 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098488 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098529 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098745 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099013 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: 
"rabbitmq-erlang-cookie") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099155 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099693 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.100116 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.114582 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7" (OuterVolumeSpecName: "kube-api-access-ltxf7") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "kube-api-access-ltxf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.114996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.117578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.118714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4" (OuterVolumeSpecName: "persistence") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "pvc-ba9ce260-411e-465e-825e-cb85f0d828d4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.132604 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.195669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200699 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200762 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") on node \"crc\" " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200781 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200793 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200805 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200816 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200828 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200842 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.219478 4766 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.219613 4766 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ba9ce260-411e-465e-825e-cb85f0d828d4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4") on node "crc" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.268343 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.305142 4766 reconciler_common.go:293] "Volume detached for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406591 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406792 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406887 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406904 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406938 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406966 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: 
\"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407553 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407841 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.410542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk" (OuterVolumeSpecName: "kube-api-access-hj6mk") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "kube-api-access-hj6mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.411050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info" (OuterVolumeSpecName: "pod-info") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.412350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.419265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8" (OuterVolumeSpecName: "persistence") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.428203 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf" (OuterVolumeSpecName: "server-conf") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.445352 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.489202 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.492796 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.493018 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" containerID="cri-o://a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb" gracePeriod=10 Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509527 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509589 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") on node \"crc\" " Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509603 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509613 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509622 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509630 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509638 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") on node \"crc\" 
DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509647 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.530915 4766 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.531114 4766 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8") on node "crc" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.611309 4766 reconciler_common.go:293] "Volume detached for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798178 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238"} Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798214 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798252 4766 scope.go:117] "RemoveContainer" containerID="90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.800706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"a6bfba4c8f09a9b72e6500c3cd5b8a4d9dd328a59974eb580780494c99cc6fcc"} Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.800768 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.802890 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerID="a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb" exitCode=0 Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.802948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb"} Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.816365 4766 scope.go:117] "RemoveContainer" containerID="b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.833697 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.841140 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.863561 4766 scope.go:117] "RemoveContainer" containerID="afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.871695 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.883646 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.888943 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889331 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889377 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="setup-container" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889387 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="setup-container" Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889406 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="setup-container" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889406 4766 scope.go:117] "RemoveContainer" containerID="5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889414 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="setup-container" Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889530 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 
17:41:49.889811 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889836 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.890700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.892716 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.892932 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.893028 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.893168 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7fqzb" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.894515 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.895238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.896370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.897872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.898855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.898977 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.899140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bz89s" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.899263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.900511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.909821 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017056 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017863 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018042 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018080 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018161 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018256 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018325 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018368 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018425 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018455 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.051058 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" path="/var/lib/kubelet/pods/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc/volumes" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.052189 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" path="/var/lib/kubelet/pods/b9cdb86f-7214-4a3e-818a-dd6936b19daf/volumes" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120059 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120108 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120132 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120178 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120233 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120343 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120427 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120468 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121067 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121745 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.122223 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.122623 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.123338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125933 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125966 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ee0fb1f36d21ee32de31c2c1b35f1f2033c96e9c0c8d1603b6b408ac3d6223f/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.126986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127149 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127763 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
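[editor's note] Both csi_attacher entries above skip the device-staging step because the node plugin never advertises STAGE_UNSTAGE_VOLUME: the kubelet queries the driver over the CSI NodeGetCapabilities RPC, and when that capability is absent it bypasses NodeStageVolume/NodeUnstageVolume entirely, so MountDevice merely records the .../globalmount staging path and the volume goes straight to NodePublishVolume. Below is a minimal sketch of the two possible answers a driver can give, assuming the standard CSI Go bindings; this shows the generic pattern, not kubevirt.io.hostpath-provisioner's actual source.

package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeGetCapabilitiesUnstaged returns an empty capability list, which is
// evidently what the hostpath provisioner does: the kubelet then logs
// "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." as seen
// above and publishes the volume without a staging step.
func nodeGetCapabilitiesUnstaged(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{}, nil
}

// nodeGetCapabilitiesStaged is what a driver that stages volumes would
// return instead; with this answer the kubelet calls NodeStageVolume with
// the globalmount path recorded in the MountDevice entries, then bind-mounts
// per pod via NodePublishVolume.
func nodeGetCapabilitiesStaged(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{{
			Type: &csi.NodeServiceCapability_Rpc{
				Rpc: &csi.NodeServiceCapability_RPC{
					Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
				},
			},
		}},
	}, nil
}

func main() {}

Staging exists for drivers that need a once-per-node setup, such as formatting and mounting a block device before bind-mounting it into each pod; a hostpath-style provisioner only bind-mounts a local directory, so it has nothing to stage and omitting the capability is the expected behavior, not an error. [end editor's note]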
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127791 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a67480c1d51246343d54cce22ecd2529a760cf02f3b5a31cca902016f15d50c3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.130801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.142497 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.148252 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.160445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.167392 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.281281 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.298772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.455748 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526581 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.530418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx" (OuterVolumeSpecName: "kube-api-access-dnjtx") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "kube-api-access-dnjtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.558859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config" (OuterVolumeSpecName: "config") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.559501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628422 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628455 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628465 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") on node \"crc\" DevicePath \"\"" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.751934 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.786859 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:41:50 crc kubenswrapper[4766]: W0130 17:41:50.788853 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd0348d_2f44_4961_9503_eb8ce09016d8.slice/crio-7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58 WatchSource:0}: Error finding container 7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58: Status 404 returned error can't find the container with id 7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58 Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.816752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"6158fa7c90c40c3905ef3369b347739baa1209eb6794f989832d1a300a02e3de"} Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.818633 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58"} Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.822982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769"} Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.823042 4766 scope.go:117] "RemoveContainer" containerID="a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.823051 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.841950 4766 scope.go:117] "RemoveContainer" containerID="987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.853152 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.858763 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:41:51 crc kubenswrapper[4766]: I0130 17:41:51.832716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07"} Jan 30 17:41:51 crc kubenswrapper[4766]: I0130 17:41:51.834996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d"} Jan 30 17:41:52 crc kubenswrapper[4766]: I0130 17:41:52.048261 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" path="/var/lib/kubelet/pods/5ec757c0-9d3d-4d66-9cd8-742105f2c48e/volumes" Jan 30 17:42:09 crc kubenswrapper[4766]: I0130 17:42:09.045272 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:42:09 crc kubenswrapper[4766]: I0130 17:42:09.045770 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.065813 4766 generic.go:334] "Generic (PLEG): container finished" podID="b579b360-d367-4637-8bf4-24be247f4daf" containerID="528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07" exitCode=0 Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.065908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerDied","Data":"528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07"} Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.068497 4766 generic.go:334] "Generic (PLEG): container finished" podID="1fd0348d-2f44-4961-9503-eb8ce09016d8" containerID="63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d" exitCode=0 Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.068541 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerDied","Data":"63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.092292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"3a9a754e2871aa2dcf9c538d95d0a137d0ee2fca4a3dddf391ff4585dc468eb1"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.092974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.095975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"fbbe5bec4b359c72e85fc61bd1c297c0a5b74557b6c30d2687f2232b936a4140"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.096217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.123115 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.123086765 podStartE2EDuration="36.123086765s" podCreationTimestamp="2026-01-30 17:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:25.11516212 +0000 UTC m=+4799.753119466" watchObservedRunningTime="2026-01-30 17:42:25.123086765 +0000 UTC m=+4799.761044111" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.145078 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.145054231 podStartE2EDuration="36.145054231s" podCreationTimestamp="2026-01-30 17:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:25.143479638 +0000 UTC m=+4799.781436984" watchObservedRunningTime="2026-01-30 17:42:25.145054231 +0000 UTC m=+4799.783011577" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.045600 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046124 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046732 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046799 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c" gracePeriod=600 Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206289 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c" exitCode=0 Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206374 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"} Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206675 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.221073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"} Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.285506 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.304265 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.681638 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 17:42:50 crc kubenswrapper[4766]: E0130 17:42:50.682428 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682441 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: E0130 17:42:50.682460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="init" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682466 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="init" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682601 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.683304 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.689069 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.689799 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-slmpt"
Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.827159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client"
Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.928743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client"
Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.956341 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client"
Jan 30 17:42:51 crc kubenswrapper[4766]: I0130 17:42:51.001101 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:42:51 crc kubenswrapper[4766]: I0130 17:42:51.507134 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.300603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerStarted","Data":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"}
Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.301103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerStarted","Data":"d5fca061ae43a81617098f04a8518ad9f8c173148013c0de0c644f6920fe37cb"}
Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.324622 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.324591436 podStartE2EDuration="2.324591436s" podCreationTimestamp="2026-01-30 17:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:52.315709575 +0000 UTC m=+4826.953666921" watchObservedRunningTime="2026-01-30 17:42:52.324591436 +0000 UTC m=+4826.962548782"
Jan 30 17:43:05 crc kubenswrapper[4766]: I0130 17:43:05.607004 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:43:05 crc kubenswrapper[4766]: I0130 17:43:05.607746 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client" containerID="cri-o://79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" gracePeriod=30
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.043789 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.154109 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"1129ee55-bf4e-46de-849a-fe2fa0de8181\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") "
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.159584 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr" (OuterVolumeSpecName: "kube-api-access-wczvr") pod "1129ee55-bf4e-46de-849a-fe2fa0de8181" (UID: "1129ee55-bf4e-46de-849a-fe2fa0de8181"). InnerVolumeSpecName "kube-api-access-wczvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.255643 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") on node \"crc\" DevicePath \"\""
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406046 4766 generic.go:334] "Generic (PLEG): container finished" podID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" exitCode=143
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406100 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerDied","Data":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"}
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406259 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerDied","Data":"d5fca061ae43a81617098f04a8518ad9f8c173148013c0de0c644f6920fe37cb"}
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406279 4766 scope.go:117] "RemoveContainer" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.425994 4766 scope.go:117] "RemoveContainer" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"
Jan 30 17:43:06 crc kubenswrapper[4766]: E0130 17:43:06.426396 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": container with ID starting with 79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108 not found: ID does not exist" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.426430 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"} err="failed to get container status \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": rpc error: code = NotFound desc = could not find container \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": container with ID starting with 79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108 not found: ID does not exist"
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.446641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.454784 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:43:08 crc kubenswrapper[4766]: I0130 17:43:08.047931 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" path="/var/lib/kubelet/pods/1129ee55-bf4e-46de-849a-fe2fa0de8181/volumes"
Jan 30 17:43:31 crc kubenswrapper[4766]: I0130 17:43:31.430270 4766 scope.go:117] "RemoveContainer" containerID="3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262"
Jan 30 17:44:39 crc kubenswrapper[4766]: I0130 17:44:39.045620 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:44:39 crc kubenswrapper[4766]: I0130 17:44:39.046646 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.144799 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"]
Jan 30 17:45:00 crc kubenswrapper[4766]: E0130 17:45:00.145785 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.145805 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.145982 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.146593 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.149001 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.149248 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.153127 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"]
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.273882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.273948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.274008 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.374789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.374927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.375049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.375840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.383826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.394151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.468500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.878994 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"]
Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.578975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerStarted","Data":"ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93"}
Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.579356 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerStarted","Data":"424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c"}
Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.598756 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" podStartSLOduration=1.598735786 podStartE2EDuration="1.598735786s" podCreationTimestamp="2026-01-30 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:45:01.597670906 +0000 UTC m=+4956.235628262" watchObservedRunningTime="2026-01-30 17:45:01.598735786 +0000 UTC m=+4956.236693132"
Jan 30 17:45:02 crc kubenswrapper[4766]: I0130 17:45:02.590635 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerID="ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93" exitCode=0
Jan 30 17:45:02 crc kubenswrapper[4766]: I0130 17:45:02.590750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerDied","Data":"ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93"}
Jan 30 17:45:03 crc kubenswrapper[4766]: I0130 17:45:03.881960 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025564 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") "
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") "
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") "
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.026126 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.026475 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.031559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.031849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp" (OuterVolumeSpecName: "kube-api-access-7ddbp") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "kube-api-access-7ddbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.128330 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.128366 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerDied","Data":"424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c"}
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607347 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c"
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607377 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.666400 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"]
Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.672126 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"]
Jan 30 17:45:06 crc kubenswrapper[4766]: I0130 17:45:06.048779 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" path="/var/lib/kubelet/pods/3d00d929-3c4f-4555-b75b-a39750dc609b/volumes"
Jan 30 17:45:09 crc kubenswrapper[4766]: I0130 17:45:09.045507 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:45:09 crc kubenswrapper[4766]: I0130 17:45:09.046828 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:45:31 crc kubenswrapper[4766]: I0130 17:45:31.497749 4766 scope.go:117] "RemoveContainer" containerID="d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.045596 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046107 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046748 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046805 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" gracePeriod=600
Jan 30 17:45:39 crc kubenswrapper[4766]: E0130 17:45:39.245014 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908874 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" exitCode=0
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"}
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908979 4766 scope.go:117] "RemoveContainer" containerID="6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"
Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.909615 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:45:39 crc kubenswrapper[4766]: E0130 17:45:39.909905 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:45:52 crc kubenswrapper[4766]: I0130 17:45:52.040028 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:45:52 crc kubenswrapper[4766]: E0130 17:45:52.041108 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:46:03 crc kubenswrapper[4766]: I0130 17:46:03.039690 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:46:03 crc kubenswrapper[4766]: E0130 17:46:03.040557 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.606906 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:09 crc kubenswrapper[4766]: E0130 17:46:09.607826 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.607843 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.608080 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.609693 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.617938 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663480 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765418 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.766401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.789266 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.930527 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:10 crc kubenswrapper[4766]: I0130 17:46:10.439018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133074 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6" exitCode=0
Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"}
Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133573 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerStarted","Data":"716bfe1daea03209a9d4d6f8afa485fed91c4531dcf18c6a919290635d33e7c7"}
Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.134797 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:46:13 crc kubenswrapper[4766]: I0130 17:46:13.146240 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d" exitCode=0
Jan 30 17:46:13 crc kubenswrapper[4766]: I0130 17:46:13.146346 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"}
Jan 30 17:46:14 crc kubenswrapper[4766]: I0130 17:46:14.155267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerStarted","Data":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"}
Jan 30 17:46:14 crc kubenswrapper[4766]: I0130 17:46:14.171820 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gq8x4" podStartSLOduration=2.543201222 podStartE2EDuration="5.171794871s" podCreationTimestamp="2026-01-30 17:46:09 +0000 UTC" firstStartedPulling="2026-01-30 17:46:11.13457515 +0000 UTC m=+5025.772532496" lastFinishedPulling="2026-01-30 17:46:13.763168799 +0000 UTC m=+5028.401126145" observedRunningTime="2026-01-30 17:46:14.171287698 +0000 UTC m=+5028.809245074" watchObservedRunningTime="2026-01-30 17:46:14.171794871 +0000 UTC m=+5028.809752217"
Jan 30 17:46:15 crc kubenswrapper[4766]: I0130 17:46:15.039854 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:46:15 crc kubenswrapper[4766]: E0130 17:46:15.040768 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.931159 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.931589 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.982192 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:20 crc kubenswrapper[4766]: I0130 17:46:20.267751 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:20 crc kubenswrapper[4766]: I0130 17:46:20.322746 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:22 crc kubenswrapper[4766]: I0130 17:46:22.210495 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gq8x4" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server" containerID="cri-o://995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" gracePeriod=2
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.102706 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.219986 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" exitCode=0
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220038 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"}
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220090 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"716bfe1daea03209a9d4d6f8afa485fed91c4531dcf18c6a919290635d33e7c7"}
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220110 4766 scope.go:117] "RemoveContainer" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.237707 4766 scope.go:117] "RemoveContainer" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.256596 4766 scope.go:117] "RemoveContainer" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.278854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") "
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.278918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") "
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.279101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") "
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280019 4766 scope.go:117] "RemoveContainer" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities" (OuterVolumeSpecName: "utilities") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.280865 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": container with ID starting with 995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a not found: ID does not exist" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280928 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"} err="failed to get container status \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": rpc error: code = NotFound desc = could not find container \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": container with ID starting with 995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a not found: ID does not exist"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280954 4766 scope.go:117] "RemoveContainer" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"
Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.281553 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": container with ID starting with 0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d not found: ID does not exist" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281600 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"} err="failed to get container status \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": rpc error: code = NotFound desc = could not find container \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": container with ID starting with 0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d not found: ID does not exist"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281632 4766 scope.go:117] "RemoveContainer" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"
Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.281945 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": container with ID starting with 465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6 not found: ID does not exist" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281976 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"} err="failed to get container status \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": rpc error: code = NotFound desc = could not find container \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": container with ID starting with 465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6 not found: ID does not exist"
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.285522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h" (OuterVolumeSpecName: "kube-api-access-hdb7h") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "kube-api-access-hdb7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.339766 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.380889 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.381423 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.381449 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.550814 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.556623 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"]
Jan 30 17:46:24 crc kubenswrapper[4766]: I0130 17:46:24.066752 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" path="/var/lib/kubelet/pods/f01a059b-5337-4eba-bc02-106bb2e15da8/volumes"
Jan 30 17:46:29 crc kubenswrapper[4766]: I0130 17:46:29.039739 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:46:29 crc kubenswrapper[4766]: E0130 17:46:29.041511 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.615955 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.616908 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-content"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.616928 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-content"
Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.616960 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.616969 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server"
Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.617010 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-utilities"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.617020 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-utilities"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.617255 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.618498 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.627252 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870226 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870609 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.888637 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.952919 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:37 crc kubenswrapper[4766]: I0130 17:46:37.563818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323582 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" exitCode=0
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"}
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerStarted","Data":"88c9e43d0a3ccbbce3f4bedcd3d0208a41e0cda34902c848ab56a33ffb898e0e"}
Jan 30 17:46:40 crc kubenswrapper[4766]: I0130 17:46:40.340781 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" exitCode=0
Jan 30 17:46:40 crc kubenswrapper[4766]: I0130 17:46:40.340870 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"}
Jan 30 17:46:41 crc kubenswrapper[4766]: I0130 17:46:41.351587 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerStarted","Data":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"}
Jan 30 17:46:41 crc kubenswrapper[4766]: I0130 17:46:41.372534 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwlnn" podStartSLOduration=3.7562011379999998 podStartE2EDuration="6.372510199s" podCreationTimestamp="2026-01-30 17:46:35 +0000 UTC" firstStartedPulling="2026-01-30 17:46:38.325187917 +0000 UTC m=+5052.963145263" lastFinishedPulling="2026-01-30 17:46:40.941496978 +0000 UTC m=+5055.579454324" observedRunningTime="2026-01-30 17:46:41.367453187 +0000 UTC m=+5056.005410543" watchObservedRunningTime="2026-01-30 17:46:41.372510199 +0000 UTC m=+5056.010467535"
Jan 30 17:46:42 crc kubenswrapper[4766]: I0130 17:46:42.039618 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:46:42 crc kubenswrapper[4766]: E0130 17:46:42.039873 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.180318 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"]
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.181848 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.191701 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"]
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408008 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.439894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.501160 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd"
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.985626 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"]
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381223 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554" exitCode=0
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554"}
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"b4995231c9fbeb8351b02fbf9273df0d1e9d55dedf024468b645cab4df9fce9a"}
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.953810 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.953879 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.001954 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.391127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c"}
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.443876 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:47 crc kubenswrapper[4766]: I0130 17:46:47.400538 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c" exitCode=0
Jan 30 17:46:47 crc kubenswrapper[4766]: I0130 17:46:47.400746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c"}
Jan 30 17:46:48 crc kubenswrapper[4766]: I0130 17:46:48.769068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:48 crc kubenswrapper[4766]: I0130 17:46:48.769557 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwlnn" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" containerID="cri-o://657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc" gracePeriod=2
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.187818 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.301470 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities" (OuterVolumeSpecName: "utilities") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.310392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch" (OuterVolumeSpecName: "kube-api-access-4qjch") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "kube-api-access-4qjch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.326788 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402843 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402913 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402924 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418230 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418303 4766 scope.go:117] "RemoveContainer" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418368 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc" exitCode=0
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418441 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"88c9e43d0a3ccbbce3f4bedcd3d0208a41e0cda34902c848ab56a33ffb898e0e"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.420982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.458243 4766 scope.go:117] "RemoveContainer" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.466423 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fd2sd" podStartSLOduration=2.470871439 podStartE2EDuration="5.466398983s" podCreationTimestamp="2026-01-30 17:46:44 +0000 UTC" firstStartedPulling="2026-01-30 17:46:45.382591104 +0000 UTC m=+5060.020548460" lastFinishedPulling="2026-01-30 17:46:48.378118668 +0000 UTC m=+5063.016076004" observedRunningTime="2026-01-30 17:46:49.442735882 +0000 UTC m=+5064.080693238" watchObservedRunningTime="2026-01-30 17:46:49.466398983 +0000 UTC m=+5064.104356319"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.490433 4766 scope.go:117] "RemoveContainer" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.491681 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.500851 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.515159 4766 scope.go:117] "RemoveContainer" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.516757 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": container with ID starting with 657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc not found: ID does not exist" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.516849 4766 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"} err="failed to get container status \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": rpc error: code = NotFound desc = could not find container \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": container with ID starting with 657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc not found: ID does not exist" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.516885 4766 scope.go:117] "RemoveContainer" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.517509 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": container with ID starting with e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941 not found: ID does not exist" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.517578 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"} err="failed to get container status \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": rpc error: code = NotFound desc = could not find container \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": container with ID starting with e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941 not found: ID does not exist" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.517619 4766 scope.go:117] "RemoveContainer" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.518294 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": container with ID starting with b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7 not found: ID does not exist" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.518336 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"} err="failed to get container status \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": rpc error: code = NotFound desc = could not find container \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": container with ID starting with b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7 not found: ID does not exist" Jan 30 17:46:50 crc kubenswrapper[4766]: I0130 17:46:50.048633 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" path="/var/lib/kubelet/pods/867849bc-5872-4cd8-8fb0-45bea0c35457/volumes" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.502348 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.503021 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.557773 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:55 crc kubenswrapper[4766]: I0130 17:46:55.525471 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:55 crc kubenswrapper[4766]: I0130 17:46:55.579162 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:57 crc kubenswrapper[4766]: I0130 17:46:57.039832 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:57 crc kubenswrapper[4766]: E0130 17:46:57.040156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:57 crc kubenswrapper[4766]: I0130 17:46:57.483358 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fd2sd" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" containerID="cri-o://8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" gracePeriod=2 Jan 30 17:46:58 crc kubenswrapper[4766]: I0130 17:46:58.492764 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" exitCode=0 Jan 30 17:46:58 crc kubenswrapper[4766]: I0130 17:46:58.492840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f"} Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.173816 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217682 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217727 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.218529 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities" (OuterVolumeSpecName: "utilities") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.222629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f" (OuterVolumeSpecName: "kube-api-access-pc49f") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "kube-api-access-pc49f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.318925 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.318959 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.339055 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.420613 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"b4995231c9fbeb8351b02fbf9273df0d1e9d55dedf024468b645cab4df9fce9a"} Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502259 4766 scope.go:117] "RemoveContainer" containerID="8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502208 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.529931 4766 scope.go:117] "RemoveContainer" containerID="ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.535167 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.547020 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.559045 4766 scope.go:117] "RemoveContainer" containerID="ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554" Jan 30 17:47:00 crc kubenswrapper[4766]: I0130 17:47:00.048070 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" path="/var/lib/kubelet/pods/05e015a3-c2f7-491b-a864-d6f03a8da284/volumes" Jan 30 17:47:09 crc kubenswrapper[4766]: I0130 17:47:09.039856 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:47:09 crc kubenswrapper[4766]: E0130 17:47:09.040698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.780859 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782001 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782027 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782059 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782071 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" 
containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782093 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782104 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782123 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782134 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782165 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782202 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782228 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782239 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782477 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782503 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.783406 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.788761 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-slmpt" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.792217 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.946224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.946793 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data" Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.048894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data" Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.049066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data" Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.052821 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.053031 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b08e05cbea0c7e1d9c8983a7b751e75758c52c7cc2564acebca783f41c2e762a/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.071492 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.087991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.110456 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.422770 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.614314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d76c2935-d3e2-401f-bdd0-878e885a5add","Type":"ContainerStarted","Data":"ac48a323f2edf7b25ffbd740e69d78e74fcc2f09968e3795ce9aeef43039cfbb"}
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.614738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d76c2935-d3e2-401f-bdd0-878e885a5add","Type":"ContainerStarted","Data":"7ba7e39489ade85a844a534bd0d37887ae31b1c23e85bd5fa5f8f4795e986a2e"}
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.651943 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.651922004 podStartE2EDuration="2.651922004s" podCreationTimestamp="2026-01-30 17:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:47:14.631444757 +0000 UTC m=+5089.269402113" watchObservedRunningTime="2026-01-30 17:47:14.651922004 +0000 UTC m=+5089.289879350"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.383819 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.385265 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.394393 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.499229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.600741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.622633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.704555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.116840 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:18 crc kubenswrapper[4766]: W0130 17:47:18.119136 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dea217a_9314_4a8d_8607_a007c861127a.slice/crio-bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1 WatchSource:0}: Error finding container bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1: Status 404 returned error can't find the container with id bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.638510 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerStarted","Data":"b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39"}
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.638835 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerStarted","Data":"bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1"}
Jan 30 17:47:19 crc kubenswrapper[4766]: I0130 17:47:19.650863 4766 generic.go:334] "Generic (PLEG): container finished" podID="8dea217a-9314-4a8d-8607-a007c861127a" containerID="b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39" exitCode=0
Jan 30 17:47:19 crc kubenswrapper[4766]: I0130 17:47:19.650946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerDied","Data":"b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39"}
Jan 30 17:47:20 crc kubenswrapper[4766]: I0130 17:47:20.969980 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:20 crc kubenswrapper[4766]: I0130 17:47:20.993489 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_8dea217a-9314-4a8d-8607-a007c861127a/mariadb-client/0.log"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.020250 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.027136 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.049715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"8dea217a-9314-4a8d-8607-a007c861127a\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") "
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.063033 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb" (OuterVolumeSpecName: "kube-api-access-ffzjb") pod "8dea217a-9314-4a8d-8607-a007c861127a" (UID: "8dea217a-9314-4a8d-8607-a007c861127a"). InnerVolumeSpecName "kube-api-access-ffzjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: E0130 17:47:21.130460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130663 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.131119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.148065 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.155124 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") on node \"crc\" DevicePath \"\""
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.256261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.358160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.375917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.452662 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.669307 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.669451 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.697673 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="8dea217a-9314-4a8d-8607-a007c861127a" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4"
Jan 30 17:47:21 crc kubenswrapper[4766]: W0130 17:47:21.957110 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68174148_c4d4_4f1d_ab10_8372f6dcaeb4.slice/crio-92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10 WatchSource:0}: Error finding container 92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10: Status 404 returned error can't find the container with id 92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.957623 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.049149 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dea217a-9314-4a8d-8607-a007c861127a" path="/var/lib/kubelet/pods/8dea217a-9314-4a8d-8607-a007c861127a/volumes"
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677862 4766 generic.go:334] "Generic (PLEG): container finished" podID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerID="d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5" exitCode=0
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"68174148-c4d4-4f1d-ab10-8372f6dcaeb4","Type":"ContainerDied","Data":"d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5"}
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"68174148-c4d4-4f1d-ab10-8372f6dcaeb4","Type":"ContainerStarted","Data":"92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10"}
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.040952 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:23 crc kubenswrapper[4766]: E0130 17:47:23.041155 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.944614 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.963932 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_68174148-c4d4-4f1d-ab10-8372f6dcaeb4/mariadb-client/0.log"
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.993070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.002311 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.108031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") "
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.114451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh" (OuterVolumeSpecName: "kube-api-access-b88rh") pod "68174148-c4d4-4f1d-ab10-8372f6dcaeb4" (UID: "68174148-c4d4-4f1d-ab10-8372f6dcaeb4"). InnerVolumeSpecName "kube-api-access-b88rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.210475 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") on node \"crc\" DevicePath \"\""
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.691846 4766 scope.go:117] "RemoveContainer" containerID="d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5"
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.692007 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:26 crc kubenswrapper[4766]: I0130 17:47:26.049633 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" path="/var/lib/kubelet/pods/68174148-c4d4-4f1d-ab10-8372f6dcaeb4/volumes"
Jan 30 17:47:31 crc kubenswrapper[4766]: I0130 17:47:31.564975 4766 scope.go:117] "RemoveContainer" containerID="0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"
Jan 30 17:47:36 crc kubenswrapper[4766]: I0130 17:47:36.043605 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:36 crc kubenswrapper[4766]: E0130 17:47:36.044198 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:47:49 crc kubenswrapper[4766]: I0130 17:47:49.038879 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:49 crc kubenswrapper[4766]: E0130 17:47:49.039549 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:48:01 crc kubenswrapper[4766]: I0130 17:48:01.039797 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:48:01 crc kubenswrapper[4766]: E0130 17:48:01.040817 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.984662 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 17:48:03 crc kubenswrapper[4766]: E0130 17:48:03.985026 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.985045 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.985362 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.986380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.989718 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h4smv"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.989914 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.994739 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.996416 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.008259 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.009519 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.021139 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.022487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026839 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026908 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghqb\" (UniqueName: \"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.031483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.063635 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.128999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghqb\" (UniqueName: \"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129849 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.131036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.131139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.131170 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132950 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133128 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.134072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.134683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.136735 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.136763 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57bd0c9220c7b3cf0c3fac8a83ec31e9cd3ecf2a08f7ee09f213bf587e64c805/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.139200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.146908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghqb\" (UniqueName: \"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.166663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.176395 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.178117 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186091 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186265 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fvvb2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.192373 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.221594 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.223168 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.232553 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.234113 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236412 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236472 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236544 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240107 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240151 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240232 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240257 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 17:48:04 crc kubenswrapper[4766]: 
I0130 17:48:04.240328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240341 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240401 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240693 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.242131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.242765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.243163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247018 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247152 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247257 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27ea08c57b06496e2d93d97b9248d1c8155fdae78f0593fca82f73e37336042a/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.248093 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.250926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.252540 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.256108 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.256149 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8d66a76004ccd7102542b83fa60b6d7731a2eea77eb91c16605bd100f23334a/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.259872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.264990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.269832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.304509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.310898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341394 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341569 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341600 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341662 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod 
\"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341695 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341780 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341842 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.342213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.343860 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.344133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345965 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345993 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b79da9dd838b1d23c15710aab6ce2b6fb8c619bcc90851891501a8917c282052/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.355367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.359167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.368572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.374660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.383459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443137 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443226 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443253 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443446 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443480 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443702 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.444553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.447263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: 
\"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.450731 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.455420 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.468022 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fad90c2c72d134ef9ba9a53a0c0b32c3c7c172b59b324139234f8cbee12231bd/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.464215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.455657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.497127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.536882 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545204 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546476 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546558 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.553040 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.549431 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.556323 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/883e66b7a9a89d385eec218336add04608336322f761d687a93ed65b04608b84/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.564722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.582444 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.614308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.622616 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.883017 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.971716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.984143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"27286e812b18d1e43a8bae8a21c3ece2f203d193ecd85a3a5af8469a9941ce67"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.109804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.114268 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1053f18b_60a9_44c8_84f5_77bc506a83c1.slice/crio-8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867 WatchSource:0}: Error finding container 8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867: Status 404 returned error can't find the container with id 8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.203828 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.210622 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76df5ae8_0eeb_4bb5_86ee_1c416397a186.slice/crio-ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5 WatchSource:0}: Error finding container ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5: Status 404 returned error can't find the container with id ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.292953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.316724 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b4e121_951b_4c45_a227_1ec8638a2320.slice/crio-46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05 WatchSource:0}: Error finding container 46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05: Status 404 returned error can't find the container with id 46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.932414 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.932415 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2591e329_01bd_4573_8590_6e3f62bfb187.slice/crio-81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba WatchSource:0}: Error finding container 81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba: Status 404 returned error can't find the container with id 81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"896dc9b2140cdbb9feca3570d7b30f7f18296bf3abaa007934e72bb64c6f8b1a"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"6daf5437d8aeddaf3b430297928833b40f729f84da4f3e95f92d0aab3b16b563"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.995632 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"201fe3bb1762cbcd5153a87229856504d7b798d0dd8ff55c10e85ad0f6c744d0"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.995663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"a739b2823646428a167772010d67fbc65e78b9a529222cdff4121f7b89dedda7"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"3835bd209ba13ed52993559938cc3f790f1653c2220bef4c91917bfec829fa7b"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"92427a4b4d027e89307e4fea29d64af553b582cf6f71a3fb1eec67d57f975d98"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.000636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"40ac9d1bc9dd75d4d0d07b3439b9abd3570631d5f84ca0e2439e890e6564322b"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"a57bd5da8711287631b67c7bf6f938a00df20d2c92a407cef4cd93aa386b134a"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5"} Jan 30 17:48:06 crc 
kubenswrapper[4766]: I0130 17:48:06.004635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"bb1c7bb1537090c6ec37d76442eee16cf11ba09db0137d4250ed20b8aac54faa"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.004668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"25d6d8421b6706e8bd39234f0da9fcfe8c25d9a4cd62bd9fe06eea336925eb44"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.004680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"937db78a490aacb898bb62aad7d1a63bd31912bdec468f3c8ceb94d72a3e1f56"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.043060 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.043037762 podStartE2EDuration="3.043037762s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.01550602 +0000 UTC m=+5140.653463366" watchObservedRunningTime="2026-01-30 17:48:06.043037762 +0000 UTC m=+5140.680995118" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.046341 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.046328288 podStartE2EDuration="4.046328288s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.038387851 +0000 UTC m=+5140.676345217" watchObservedRunningTime="2026-01-30 17:48:06.046328288 +0000 UTC m=+5140.684285644" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.088388 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.088368461 podStartE2EDuration="3.088368461s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.082642761 +0000 UTC m=+5140.720600107" watchObservedRunningTime="2026-01-30 17:48:06.088368461 +0000 UTC m=+5140.726325807" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.094031 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.093986228 podStartE2EDuration="4.093986228s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.06351821 +0000 UTC m=+5140.701475566" watchObservedRunningTime="2026-01-30 17:48:06.093986228 +0000 UTC m=+5140.731943574" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.191574 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.191551446 podStartE2EDuration="3.191551446s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
17:48:06.161460947 +0000 UTC m=+5140.799418303" watchObservedRunningTime="2026-01-30 17:48:06.191551446 +0000 UTC m=+5140.829508792" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.014562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"5c44f5d4d347a50bc0817a695e6ab2e88b01d8e4aa0980d011edffcea3a9eb80"} Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.014630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"1dc982ffcbb9c41c87e94e8a298fae0ee3744121bd0a8d8542f5dc4cc4ba397c"} Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.040838 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.040817854 podStartE2EDuration="5.040817854s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:07.034718365 +0000 UTC m=+5141.672675741" watchObservedRunningTime="2026-01-30 17:48:07.040817854 +0000 UTC m=+5141.678775200" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.356080 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.369352 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.375155 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.538274 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.615129 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.624031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.356417 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.369013 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.375708 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.538032 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.614981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.623214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.391491 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.404772 
4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.415439 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.436957 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.454084 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.572532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.626162 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.683618 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.686450 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.742820 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.744802 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.747671 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.751273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.761680 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.761745 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872206 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc 
kubenswrapper[4766]: I0130 17:48:10.872298 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.927698 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: E0130 17:48:10.928315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-kxwj9 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" podUID="06b62d4e-8988-4983-a956-a96e3c5b055d" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.959082 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.960902 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.963485 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.973888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.973996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.976309 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.976493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.994308 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.050404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.063946 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.077024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.077066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.088986 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177935 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178140 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config" (OuterVolumeSpecName: "config") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178648 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.180362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.180897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181092 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181200 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181824 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9" (OuterVolumeSpecName: "kube-api-access-kxwj9") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "kube-api-access-kxwj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.182272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.182462 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183123 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183138 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183148 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.201029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.284086 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.284994 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.814363 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.039615 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:12 crc kubenswrapper[4766]: E0130 17:48:12.040164 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.057939 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerStarted","Data":"a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9"} Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.058023 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.145611 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.154248 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.069535 4766 generic.go:334] "Generic (PLEG): container finished" podID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerID="1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15" exitCode=0 Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.069636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15"} Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.751763 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.752824 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.757977 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.759782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879476 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879495 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.986099 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.986142 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34fd43cf0e80788d7c16160b1b222ac3f3ff804c8ca8200947eb730686989322/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.990981 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.003298 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.021583 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.050470 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b62d4e-8988-4983-a956-a96e3c5b055d" path="/var/lib/kubelet/pods/06b62d4e-8988-4983-a956-a96e3c5b055d/volumes" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.073756 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.080119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerStarted","Data":"d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08"} Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.080391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.101625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podStartSLOduration=4.101605211 podStartE2EDuration="4.101605211s" podCreationTimestamp="2026-01-30 17:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:14.097584575 +0000 UTC m=+5148.735541931" watchObservedRunningTime="2026-01-30 17:48:14.101605211 +0000 UTC m=+5148.739562547" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.566571 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:14 crc kubenswrapper[4766]: W0130 17:48:14.567776 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fb6354d_977f_494f_9a51_0a1b8f48c686.slice/crio-1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591 WatchSource:0}: Error finding container 1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591: Status 404 returned error can't find the container with id 1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591 Jan 30 17:48:15 crc kubenswrapper[4766]: I0130 17:48:15.091652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7fb6354d-977f-494f-9a51-0a1b8f48c686","Type":"ContainerStarted","Data":"83a1f53cf7d0c4406d5a72249ccb3ade022d371c6d16fde6067b73d61e92f77b"} Jan 30 17:48:15 crc kubenswrapper[4766]: I0130 17:48:15.092029 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7fb6354d-977f-494f-9a51-0a1b8f48c686","Type":"ContainerStarted","Data":"1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591"} Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.568847 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=7.568818992 podStartE2EDuration="7.568818992s" podCreationTimestamp="2026-01-30 17:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:15.11033535 +0000 UTC m=+5149.748292696" watchObservedRunningTime="2026-01-30 17:48:19.568818992 +0000 UTC m=+5154.206776338" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.574796 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.576440 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wxzgn" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581931 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.589257 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778563 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778754 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.790909 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.795437 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.905657 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:48:20 crc kubenswrapper[4766]: I0130 17:48:20.353967 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:20 crc kubenswrapper[4766]: W0130 17:48:20.357281 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9743ed16_7558_435e_9f72_3688bd1102d7.slice/crio-04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc WatchSource:0}: Error finding container 04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc: Status 404 returned error can't find the container with id 04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.142679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"f313b7a60eb3c33f8accb6f37a6bc487347211382a4d46df5c79886df8cdf21a"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.142989 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"68560760de42cca4a7da438368fba192e806f9f333ff93aa469770b214518a36"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.143003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.143040 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.166359 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.16633491 podStartE2EDuration="2.16633491s" podCreationTimestamp="2026-01-30 17:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:21.157877508 +0000 UTC m=+5155.795834854" watchObservedRunningTime="2026-01-30 17:48:21.16633491 +0000 UTC m=+5155.804292256" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.286532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.339469 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.339722 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" containerID="cri-o://64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" gracePeriod=10 Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.819367 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915426 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.920490 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6" (OuterVolumeSpecName: "kube-api-access-dz2t6") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "kube-api-access-dz2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.956417 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.957214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config" (OuterVolumeSpecName: "config") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017446 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017484 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017494 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.154944 4766 generic.go:334] "Generic (PLEG): container finished" podID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" exitCode=0 Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155072 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"} Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199"} Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155236 4766 scope.go:117] "RemoveContainer" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.178677 4766 scope.go:117] "RemoveContainer" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.180028 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.186796 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.198631 4766 scope.go:117] "RemoveContainer" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: E0130 17:48:22.199534 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": container with ID starting with 64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee not found: ID does not exist" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.199589 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"} err="failed to get container status 
\"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": rpc error: code = NotFound desc = could not find container \"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": container with ID starting with 64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee not found: ID does not exist" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.199625 4766 scope.go:117] "RemoveContainer" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: E0130 17:48:22.200036 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": container with ID starting with 71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89 not found: ID does not exist" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.200094 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89"} err="failed to get container status \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": rpc error: code = NotFound desc = could not find container \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": container with ID starting with 71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89 not found: ID does not exist" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.048890 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" path="/var/lib/kubelet/pods/83b52c39-5b23-4e74-abf9-0018a54b215e/volumes" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.201941 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:24 crc kubenswrapper[4766]: E0130 17:48:24.202299 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202315 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: E0130 17:48:24.202330 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="init" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202336 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="init" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202480 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202995 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.213949 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.298672 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.299717 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.306568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.311171 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.357431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.357488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458637 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458768 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.459940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.474774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.519104 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.560650 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.560729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.561708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.579783 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.615448 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.016744 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.091039 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:25 crc kubenswrapper[4766]: W0130 17:48:25.092725 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d0580a7_5f19_4aa4_893f_106812b15326.slice/crio-5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528 WatchSource:0}: Error finding container 5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528: Status 404 returned error can't find the container with id 5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528 Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.178505 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerStarted","Data":"5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.180024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerStarted","Data":"3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.180051 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerStarted","Data":"92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.202734 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-q5td7" podStartSLOduration=1.202711185 podStartE2EDuration="1.202711185s" podCreationTimestamp="2026-01-30 17:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:25.195081235 +0000 UTC m=+5159.833038611" watchObservedRunningTime="2026-01-30 17:48:25.202711185 +0000 UTC m=+5159.840668531" Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.190044 4766 generic.go:334] "Generic (PLEG): container finished" podID="9d0580a7-5f19-4aa4-893f-106812b15326" containerID="869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798" exitCode=0 Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.190152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerDied","Data":"869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798"} Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.194002 4766 generic.go:334] "Generic (PLEG): container finished" podID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerID="3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c" exitCode=0 Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.194049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" 
event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerDied","Data":"3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c"} Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.039894 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:27 crc kubenswrapper[4766]: E0130 17:48:27.040228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.597560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.610444 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.713800 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"e09e2e76-7c0b-4efa-b226-18df0a512567\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.713889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"9d0580a7-5f19-4aa4-893f-106812b15326\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.714004 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"e09e2e76-7c0b-4efa-b226-18df0a512567\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.714037 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"9d0580a7-5f19-4aa4-893f-106812b15326\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.715081 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d0580a7-5f19-4aa4-893f-106812b15326" (UID: "9d0580a7-5f19-4aa4-893f-106812b15326"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.715108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e09e2e76-7c0b-4efa-b226-18df0a512567" (UID: "e09e2e76-7c0b-4efa-b226-18df0a512567"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.721097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr" (OuterVolumeSpecName: "kube-api-access-jwzfr") pod "e09e2e76-7c0b-4efa-b226-18df0a512567" (UID: "e09e2e76-7c0b-4efa-b226-18df0a512567"). InnerVolumeSpecName "kube-api-access-jwzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.723296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6" (OuterVolumeSpecName: "kube-api-access-9rjd6") pod "9d0580a7-5f19-4aa4-893f-106812b15326" (UID: "9d0580a7-5f19-4aa4-893f-106812b15326"). InnerVolumeSpecName "kube-api-access-9rjd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815393 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815420 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815431 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815442 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210367 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerDied","Data":"92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e"} Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210499 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212745 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerDied","Data":"5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528"} Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212786 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212844 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721559 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:29 crc kubenswrapper[4766]: E0130 17:48:29.721924 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721937 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: E0130 17:48:29.721959 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721965 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.722101 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.722118 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.723020 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.724797 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725276 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725473 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.746771 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.846819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.847020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.847196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.955135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.955675 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.970606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:30 crc kubenswrapper[4766]: I0130 17:48:30.038989 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:30 crc kubenswrapper[4766]: I0130 17:48:30.496046 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:30 crc kubenswrapper[4766]: W0130 17:48:30.501028 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a04cef9_eaad_4fba_9aa9_0f15ed426885.slice/crio-797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a WatchSource:0}: Error finding container 797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a: Status 404 returned error can't find the container with id 797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.246315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerStarted","Data":"a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a"} Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.246363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerStarted","Data":"797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a"} Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.267997 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6hlg5" podStartSLOduration=2.267978378 podStartE2EDuration="2.267978378s" podCreationTimestamp="2026-01-30 17:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:31.266734735 +0000 UTC m=+5165.904692101" watchObservedRunningTime="2026-01-30 17:48:31.267978378 +0000 UTC m=+5165.905935724" Jan 30 17:48:33 crc kubenswrapper[4766]: I0130 17:48:33.263748 4766 generic.go:334] "Generic (PLEG): container finished" podID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerID="a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a" exitCode=0 Jan 30 17:48:33 crc kubenswrapper[4766]: I0130 17:48:33.263844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerDied","Data":"a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a"} Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.619379 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732301 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.737542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj" (OuterVolumeSpecName: "kube-api-access-xsdwj") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "kube-api-access-xsdwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.757650 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.769545 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data" (OuterVolumeSpecName: "config-data") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834425 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834462 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834472 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281776 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerDied","Data":"797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a"} Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281817 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281819 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.574690 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:35 crc kubenswrapper[4766]: E0130 17:48:35.575104 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.575118 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.575410 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.576620 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.584365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.585500 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.593741 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.594250 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.595908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.596534 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.596829 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.597044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.607053 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677279 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " 
pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " 
pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779960 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.780021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.780052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781610 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.782549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.786205 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.786613 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.787244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.789502 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.796006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.804616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.812728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.903810 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.912561 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:36 crc kubenswrapper[4766]: I0130 17:48:36.428155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:36 crc kubenswrapper[4766]: W0130 17:48:36.436667 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83eadd27_65d9_4d4b_aa94_e58a77793239.slice/crio-6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40 WatchSource:0}: Error finding container 6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40: Status 404 returned error can't find the container with id 6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40 Jan 30 17:48:36 crc kubenswrapper[4766]: I0130 17:48:36.504906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:36 crc kubenswrapper[4766]: W0130 17:48:36.515852 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd56c4a02_3a71_44af_b4e3_c01fdfe94aa2.slice/crio-8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743 WatchSource:0}: Error finding container 8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743: Status 404 returned error can't find the container with id 8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743 Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.317500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerStarted","Data":"39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.317791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerStarted","Data":"6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319454 4766 generic.go:334] "Generic (PLEG): container finished" podID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerID="221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633" exitCode=0 Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerStarted","Data":"8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.341388 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xhm6m" podStartSLOduration=2.341370844 podStartE2EDuration="2.341370844s" podCreationTimestamp="2026-01-30 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 17:48:37.338278823 +0000 UTC m=+5171.976236159" watchObservedRunningTime="2026-01-30 17:48:37.341370844 +0000 UTC m=+5171.979328190" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.041204 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:38 crc kubenswrapper[4766]: E0130 17:48:38.041775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.330045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerStarted","Data":"0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0"} Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.330345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.358090 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" podStartSLOduration=3.358073362 podStartE2EDuration="3.358073362s" podCreationTimestamp="2026-01-30 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:38.352480006 +0000 UTC m=+5172.990437352" watchObservedRunningTime="2026-01-30 17:48:38.358073362 +0000 UTC m=+5172.996030708" Jan 30 17:48:39 crc kubenswrapper[4766]: I0130 17:48:39.956911 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 17:48:40 crc kubenswrapper[4766]: I0130 17:48:40.350614 4766 generic.go:334] "Generic (PLEG): container finished" podID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerID="39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716" exitCode=0 Jan 30 17:48:40 crc kubenswrapper[4766]: I0130 17:48:40.350665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerDied","Data":"39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716"} Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.707054 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781265 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781885 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.786762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts" (OuterVolumeSpecName: "scripts") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.786914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.787742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.787773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4" (OuterVolumeSpecName: "kube-api-access-862w4") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "kube-api-access-862w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.805570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.806605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data" (OuterVolumeSpecName: "config-data") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884226 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884555 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884629 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884683 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884741 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884794 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.367739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerDied","Data":"6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40"} Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.367778 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 
17:48:42.367850 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.424590 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.430004 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.546007 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:42 crc kubenswrapper[4766]: E0130 17:48:42.546866 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.546955 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.547488 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.548424 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.551999 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.552612 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.552879 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.553094 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.555448 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.555545 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.700974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " 
pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803271 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803611 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807058 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807831 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.808486 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.808532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.819741 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.864580 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:43 crc kubenswrapper[4766]: I0130 17:48:43.255401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:43 crc kubenswrapper[4766]: W0130 17:48:43.258221 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c267d58_0d99_463b_9011_34118e7f961a.slice/crio-69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41 WatchSource:0}: Error finding container 69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41: Status 404 returned error can't find the container with id 69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41 Jan 30 17:48:43 crc kubenswrapper[4766]: I0130 17:48:43.376465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerStarted","Data":"69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41"} Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.052035 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" path="/var/lib/kubelet/pods/83eadd27-65d9-4d4b-aa94-e58a77793239/volumes" Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.392979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerStarted","Data":"bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa"} Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.423541 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zr744" podStartSLOduration=2.42351781 podStartE2EDuration="2.42351781s" podCreationTimestamp="2026-01-30 17:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:44.418707314 +0000 UTC m=+5179.056664680" watchObservedRunningTime="2026-01-30 17:48:44.42351781 +0000 UTC m=+5179.061475156" Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.905395 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.961931 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.962234 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" containerID="cri-o://d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08" gracePeriod=10 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.424920 4766 generic.go:334] "Generic (PLEG): container finished" podID="9c267d58-0d99-463b-9011-34118e7f961a" containerID="bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa" exitCode=0 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.425031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerDied","Data":"bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442808 4766 generic.go:334] 
"Generic (PLEG): container finished" podID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerID="d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08" exitCode=0 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442854 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442896 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.463480 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568229 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568274 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568398 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568475 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.586227 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k" (OuterVolumeSpecName: "kube-api-access-7qp4k") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "kube-api-access-7qp4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.615364 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config" (OuterVolumeSpecName: "config") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.616556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.617472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.619346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670767 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670810 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670821 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670835 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670847 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.449000 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.483808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.489913 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.789202 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887102 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887472 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890657 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890778 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890942 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts" (OuterVolumeSpecName: "scripts") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.891345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb" (OuterVolumeSpecName: "kube-api-access-g7jjb") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "kube-api-access-g7jjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.909482 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.911461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data" (OuterVolumeSpecName: "config-data") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988921 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988952 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988963 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988971 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988979 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988987 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.051072 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" path="/var/lib/kubelet/pods/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d/volumes" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456498 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerDied","Data":"69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41"} Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456539 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456559 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527236 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527621 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527643 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527658 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="init" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527666 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="init" Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527679 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527686 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527843 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527860 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.528406 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531814 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531896 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.532335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.538900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599712 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" 
(UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700967 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.701053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706576 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706597 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.707118 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.722996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.872146 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:49 crc kubenswrapper[4766]: I0130 17:48:49.307411 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:49 crc kubenswrapper[4766]: W0130 17:48:49.315552 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2175d86_a673_4c75_9344_d410bff4770a.slice/crio-37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923 WatchSource:0}: Error finding container 37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923: Status 404 returned error can't find the container with id 37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923 Jan 30 17:48:49 crc kubenswrapper[4766]: I0130 17:48:49.479608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9bc78c74-tqx5h" event={"ID":"d2175d86-a673-4c75-9344-d410bff4770a","Type":"ContainerStarted","Data":"37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923"} Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.488904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9bc78c74-tqx5h" event={"ID":"d2175d86-a673-4c75-9344-d410bff4770a","Type":"ContainerStarted","Data":"a3a9e271e5adcc9216346b37d04fe08b89775cb7254ad09c6fcfddb496f06d4c"} Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.489324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.509087 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d9bc78c74-tqx5h" podStartSLOduration=2.509050274 podStartE2EDuration="2.509050274s" podCreationTimestamp="2026-01-30 17:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:50.505414109 +0000 UTC m=+5185.143371455" watchObservedRunningTime="2026-01-30 17:48:50.509050274 +0000 UTC m=+5185.147007620" Jan 30 17:48:51 crc kubenswrapper[4766]: I0130 17:48:51.285043 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.8:5353: i/o timeout" Jan 30 17:48:52 crc kubenswrapper[4766]: I0130 17:48:52.040032 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:52 crc kubenswrapper[4766]: E0130 17:48:52.040292 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.244357 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.246816 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.255900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359647 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359676 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.483756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.483870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.489708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.784541 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.257966 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:04 crc kubenswrapper[4766]: W0130 17:49:04.267351 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb4fd9d_0f69_412a_80ee_5ae509d9fff7.slice/crio-9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3 WatchSource:0}: Error finding container 9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3: Status 404 returned error can't find the container with id 9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3 Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597135 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" exitCode=0 Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364"} Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597425 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerStarted","Data":"9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3"} Jan 30 17:49:06 crc kubenswrapper[4766]: I0130 17:49:06.613398 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" exitCode=0 Jan 30 17:49:06 crc kubenswrapper[4766]: I0130 17:49:06.613478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a"} Jan 30 17:49:07 crc kubenswrapper[4766]: I0130 17:49:07.039114 4766 scope.go:117] "RemoveContainer" 
containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:07 crc kubenswrapper[4766]: E0130 17:49:07.039731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:08 crc kubenswrapper[4766]: I0130 17:49:08.642585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerStarted","Data":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} Jan 30 17:49:08 crc kubenswrapper[4766]: I0130 17:49:08.668056 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z4k4s" podStartSLOduration=2.823199805 podStartE2EDuration="5.668032887s" podCreationTimestamp="2026-01-30 17:49:03 +0000 UTC" firstStartedPulling="2026-01-30 17:49:04.598682718 +0000 UTC m=+5199.236640064" lastFinishedPulling="2026-01-30 17:49:07.44351578 +0000 UTC m=+5202.081473146" observedRunningTime="2026-01-30 17:49:08.666862206 +0000 UTC m=+5203.304819562" watchObservedRunningTime="2026-01-30 17:49:08.668032887 +0000 UTC m=+5203.305990233" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.785524 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.785872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.840657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:14 crc kubenswrapper[4766]: I0130 17:49:14.733082 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:14 crc kubenswrapper[4766]: I0130 17:49:14.788662 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:16 crc kubenswrapper[4766]: I0130 17:49:16.702605 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4k4s" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" containerID="cri-o://4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" gracePeriod=2 Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.618400 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710467 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.711382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities" (OuterVolumeSpecName: "utilities") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713476 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" exitCode=0 Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3"} Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713581 4766 scope.go:117] "RemoveContainer" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713737 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.717930 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj" (OuterVolumeSpecName: "kube-api-access-dv8dj") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "kube-api-access-dv8dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.760119 4766 scope.go:117] "RemoveContainer" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.763240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.779941 4766 scope.go:117] "RemoveContainer" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.811283 4766 scope.go:117] "RemoveContainer" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.811800 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": container with ID starting with 4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291 not found: ID does not exist" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812103 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} err="failed to get container status \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": rpc error: code = NotFound desc = could not find container \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": container with ID starting with 4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291 not found: ID does not exist" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812125 4766 scope.go:117] "RemoveContainer" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.812571 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": container with ID starting with 3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a not found: ID does not exist" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812659 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a"} err="failed to get container status \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": rpc error: code = NotFound desc = could not find container \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": container with ID starting with 3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a not found: ID does not exist" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812692 4766 scope.go:117] "RemoveContainer" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: 
I0130 17:49:17.812881 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812901 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812912 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.813315 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": container with ID starting with 142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364 not found: ID does not exist" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.813337 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364"} err="failed to get container status \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": rpc error: code = NotFound desc = could not find container \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": container with ID starting with 142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364 not found: ID does not exist" Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.044473 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:18 crc kubenswrapper[4766]: E0130 17:49:18.044684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.060827 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.061737 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:20 crc kubenswrapper[4766]: I0130 17:49:20.048573 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" path="/var/lib/kubelet/pods/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7/volumes" Jan 30 17:49:20 crc kubenswrapper[4766]: I0130 17:49:20.315293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.229956 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230820 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-utilities" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-utilities" Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230862 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230870 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230892 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-content" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230902 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-content" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.231084 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.231718 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.240474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.240696 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.241308 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5thlv" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.245682 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466771 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.468009 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.472620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.483246 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.549349 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.962569 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: W0130 17:49:24.968635 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0b97605_5664_4ae7_a15d_26b0ae7b4614.slice/crio-c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982 WatchSource:0}: Error finding container c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982: Status 404 returned error can't find the container with id c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982 Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.837361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b97605-5664-4ae7-a15d-26b0ae7b4614","Type":"ContainerStarted","Data":"4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd"} Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.837646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b97605-5664-4ae7-a15d-26b0ae7b4614","Type":"ContainerStarted","Data":"c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982"} Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.857638 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.857615692 podStartE2EDuration="1.857615692s" podCreationTimestamp="2026-01-30 17:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:49:25.854533171 +0000 UTC m=+5220.492490517" watchObservedRunningTime="2026-01-30 17:49:25.857615692 +0000 UTC m=+5220.495573058" Jan 30 17:49:31 crc kubenswrapper[4766]: I0130 17:49:31.040127 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:31 crc kubenswrapper[4766]: E0130 17:49:31.040754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:43 crc kubenswrapper[4766]: I0130 17:49:43.041466 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:43 crc kubenswrapper[4766]: E0130 17:49:43.043651 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:56 crc kubenswrapper[4766]: I0130 17:49:56.044307 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:56 crc kubenswrapper[4766]: E0130 17:49:56.045063 4766 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:09 crc kubenswrapper[4766]: I0130 17:50:09.039298 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:09 crc kubenswrapper[4766]: E0130 17:50:09.040020 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:22 crc kubenswrapper[4766]: I0130 17:50:22.039217 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:22 crc kubenswrapper[4766]: E0130 17:50:22.040012 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:33 crc kubenswrapper[4766]: I0130 17:50:33.039768 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:33 crc kubenswrapper[4766]: E0130 17:50:33.040515 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:45 crc kubenswrapper[4766]: I0130 17:50:45.038931 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:45 crc kubenswrapper[4766]: I0130 17:50:45.470582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.022213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.023880 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.063386 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.064581 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.064677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.067114 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.073682 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183772 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.184004 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285941 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286004 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.307061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.307199 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.359605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.384342 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.809991 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: W0130 17:51:00.820921 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2bd9b1_3f21_43b5_ab17_c0724bbbafd9.slice/crio-fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce WatchSource:0}: Error finding container fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce: Status 404 returned error can't find the container with id fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.867856 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: W0130 17:51:00.868444 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa06091d_37e1_4828_9f71_7160f12ac3de.slice/crio-3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983 WatchSource:0}: Error finding container 3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983: Status 404 returned error can't find the container with id 3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593013 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerID="61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f" exitCode=0 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerDied","Data":"61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerStarted","Data":"3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594877 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerID="b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36" exitCode=0 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerDied","Data":"b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerStarted","Data":"fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce"} Jan 30 17:51:02 crc kubenswrapper[4766]: I0130 17:51:02.987813 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:02 crc kubenswrapper[4766]: I0130 17:51:02.993805 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.135673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.135945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"aa06091d-37e1-4828-9f71-7160f12ac3de\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136849 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"aa06091d-37e1-4828-9f71-7160f12ac3de\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" (UID: "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa06091d-37e1-4828-9f71-7160f12ac3de" (UID: "aa06091d-37e1-4828-9f71-7160f12ac3de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137840 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137921 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.141726 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x" (OuterVolumeSpecName: "kube-api-access-s4n4x") pod "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" (UID: "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9"). 
InnerVolumeSpecName "kube-api-access-s4n4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.142093 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht" (OuterVolumeSpecName: "kube-api-access-pb9ht") pod "aa06091d-37e1-4828-9f71-7160f12ac3de" (UID: "aa06091d-37e1-4828-9f71-7160f12ac3de"). InnerVolumeSpecName "kube-api-access-pb9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.242914 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.242946 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.610889 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.610881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerDied","Data":"3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983"} Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.611031 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerDied","Data":"fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce"} Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612555 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612706 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.338052 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:05 crc kubenswrapper[4766]: E0130 17:51:05.338936 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.338960 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: E0130 17:51:05.339000 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339009 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339264 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339282 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.340240 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.342929 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ck5sq" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.344379 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.350866 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.480813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.480980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.481191 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g49wm\" 
(UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.588903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.601803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.603703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.665600 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.096842 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.633900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerStarted","Data":"f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9"} Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.633947 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerStarted","Data":"37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd"} Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.649111 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h2fkl" podStartSLOduration=1.649092902 podStartE2EDuration="1.649092902s" podCreationTimestamp="2026-01-30 17:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:06.645568025 +0000 UTC m=+5321.283525371" watchObservedRunningTime="2026-01-30 17:51:06.649092902 +0000 UTC m=+5321.287050248" Jan 30 17:51:08 crc kubenswrapper[4766]: I0130 17:51:08.653024 4766 generic.go:334] "Generic (PLEG): container finished" podID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerID="f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9" exitCode=0 Jan 30 17:51:08 crc kubenswrapper[4766]: I0130 17:51:08.653314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerDied","Data":"f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9"} Jan 30 17:51:09 crc kubenswrapper[4766]: I0130 17:51:09.905449 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069955 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.077985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm" (OuterVolumeSpecName: "kube-api-access-g49wm") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "kube-api-access-g49wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.099832 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.104427 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171535 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171572 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171586 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerDied","Data":"37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd"} Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671097 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671148 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.913886 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:10 crc kubenswrapper[4766]: E0130 17:51:10.914895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.914915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.915115 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.916269 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920271 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920610 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920774 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ck5sq" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.930792 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.932299 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.937670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.971241 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:10.998553 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.000846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001635 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001931 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.002194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.069890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.071803 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.107821 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114401 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114487 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 
17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114578 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.115487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.122893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.123022 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124216 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124454 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.126416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.130901 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124083 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.158255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216570 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216672 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216843 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216988 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " 
pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217091 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217126 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217239 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217278 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.222201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.230257 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.238338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.242119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.242254 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.252333 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319936 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319978 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " 
pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.321047 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.322012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.324494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.324733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.325365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.326803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.341801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.346933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.390821 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.524710 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.773085 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.844768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.936850 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.114729 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"389cdc439394cf9e2a0253416f271aa4a58746f87c121c65b9b03c78fb1ceacd"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"78492f49f846b1256b3db7b53047261273622abe7a4794f3ed572978359ecc54"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"e7622063fdf190dc4ccf71c87249acb6f6959e90f9e8cb8f3a873c9d6a4c8cfe"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696279 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"dfd96894ccfafc22aae2011b188b54b8bf915c7933591a4684f23e54bdc33901"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696470 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" 
event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"9bb9b9ad67a62b19a9976e1b2c313627445cd9818a62877c503776085a30fbd9"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"6ecac4918b239081671a08f85badf1a13396f5fe11e242a4bc6c0650658e4926"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"924476c1644746c6b40ebb18696b8734d8b090b6ff5fdb004ac01cb93030580a"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698169 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"923cd2a7cb9abde1b8b978ceae5b5b8a54640b6febc9cdeb634c4ce79ce28775"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"535cb8b70fa905ffe5b07582d53b005fb3401118c255483eaa57449afdb1880e"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.702846 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cfd4446-3501-49ef-911f-360c75070ca8" containerID="2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a" exitCode=0 Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.702902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.703278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerStarted","Data":"2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.720505 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78445c974-66754" podStartSLOduration=2.7204847819999998 podStartE2EDuration="2.720484782s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.710420418 +0000 UTC m=+5327.348377764" watchObservedRunningTime="2026-01-30 17:51:12.720484782 +0000 UTC m=+5327.358442128" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.754118 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b7b4f6b66-crqxp" podStartSLOduration=1.754100866 podStartE2EDuration="1.754100866s" podCreationTimestamp="2026-01-30 17:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.750834247 +0000 UTC m=+5327.388791593" watchObservedRunningTime="2026-01-30 17:51:12.754100866 +0000 UTC m=+5327.392058212" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.772763 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84dcf975b7-fj984" podStartSLOduration=2.772743702 podStartE2EDuration="2.772743702s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.763730707 +0000 UTC m=+5327.401688063" watchObservedRunningTime="2026-01-30 17:51:12.772743702 +0000 UTC m=+5327.410701048"
Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.716607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerStarted","Data":"325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"}
Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.716921 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"
Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.742677 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" podStartSLOduration=3.742656293 podStartE2EDuration="3.742656293s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:13.738920591 +0000 UTC m=+5328.376877947" watchObservedRunningTime="2026-01-30 17:51:13.742656293 +0000 UTC m=+5328.380613639"
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.066290 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xfq5b"]
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.081884 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xfq5b"]
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.393118 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.459206 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"]
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.459490 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" containerID="cri-o://0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" gracePeriod=10
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.780985 4766 generic.go:334] "Generic (PLEG): container finished" podID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerID="0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" exitCode=0
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.781335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0"}
Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.999435 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27"
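The kuberuntime_container.go:808 entry above stops the superseded dnsmasq pod with gracePeriod=10: the runtime delivers SIGTERM and escalates to SIGKILL only if the container has not exited when the grace period runs out; here the container exits promptly, so the next PLEG relist reports exitCode=0. A minimal Go sketch of that escalation pattern (process plumbing abstracted behind a callback; this is not CRI-O's implementation):

    // Sketch of a grace-period kill: SIGTERM first, SIGKILL only if the
    // container has not exited within the grace period.
    package main

    import (
        "fmt"
        "time"
    )

    func killContainer(signal func(sig string) error, exited <-chan struct{}, grace time.Duration) {
        _ = signal("SIGTERM")
        select {
        case <-exited:
            fmt.Println("container exited within grace period")
        case <-time.After(grace):
            _ = signal("SIGKILL")
            fmt.Println("grace period expired; sent SIGKILL")
        }
    }

    func main() {
        exited := make(chan struct{})
        go func() { time.Sleep(50 * time.Millisecond); close(exited) }() // simulate a quick exit
        killContainer(func(sig string) error { fmt.Println("sent", sig); return nil }, exited, 10*time.Second)
    }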
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.064327 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" path="/var/lib/kubelet/pods/0e74a4a8-0c9c-4bba-b839-4caeca1e9304/volumes" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141929 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141970 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.147091 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8" (OuterVolumeSpecName: "kube-api-access-kdrv8") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "kube-api-access-kdrv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.184102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.187926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.188872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.189629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config" (OuterVolumeSpecName: "config") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244397 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244439 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244453 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244468 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244482 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743"} Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790060 4766 scope.go:117] "RemoveContainer" containerID="0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790223 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.832717 4766 scope.go:117] "RemoveContainer" containerID="221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.836992 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.855666 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:51:23 crc kubenswrapper[4766]: I0130 17:51:23.107955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:23 crc kubenswrapper[4766]: I0130 17:51:23.180064 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:24 crc kubenswrapper[4766]: I0130 17:51:24.050983 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" path="/var/lib/kubelet/pods/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2/volumes" Jan 30 17:51:31 crc kubenswrapper[4766]: I0130 17:51:31.730785 4766 scope.go:117] "RemoveContainer" containerID="a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.870432 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:34 crc kubenswrapper[4766]: E0130 17:51:34.871308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871321 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: E0130 17:51:34.871342 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="init" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871348 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="init" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871498 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.872116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.881768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.970816 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.971946 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.972642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.972836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.973656 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.988489 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.074814 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.074881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.095101 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod 
\"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.177244 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.177338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.178084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.186645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.195422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.288925 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.608135 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:35 crc kubenswrapper[4766]: W0130 17:51:35.615437 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d09a627_470a_4719_a1d8_458eda413878.slice/crio-f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5 WatchSource:0}: Error finding container f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5: Status 404 returned error can't find the container with id f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5 Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.777314 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.885537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerStarted","Data":"05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.887465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerStarted","Data":"b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.887520 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerStarted","Data":"f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.903592 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jdcqq" podStartSLOduration=1.9035686699999999 podStartE2EDuration="1.90356867s" podCreationTimestamp="2026-01-30 17:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:35.899814679 +0000 UTC m=+5350.537772035" watchObservedRunningTime="2026-01-30 17:51:35.90356867 +0000 UTC m=+5350.541526016" Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.896832 4766 generic.go:334] "Generic (PLEG): container finished" podID="9d09a627-470a-4719-a1d8-458eda413878" containerID="b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71" exitCode=0 Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.896917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerDied","Data":"b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71"} Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.898788 4766 generic.go:334] "Generic (PLEG): container finished" podID="632e98c6-d202-4c07-9220-636bd07da76d" containerID="e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab" exitCode=0 Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.898827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" 
event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerDied","Data":"e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.383243 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.388462 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463162 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"632e98c6-d202-4c07-9220-636bd07da76d\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"9d09a627-470a-4719-a1d8-458eda413878\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"9d09a627-470a-4719-a1d8-458eda413878\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463432 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"632e98c6-d202-4c07-9220-636bd07da76d\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.464807 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "632e98c6-d202-4c07-9220-636bd07da76d" (UID: "632e98c6-d202-4c07-9220-636bd07da76d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.464846 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d09a627-470a-4719-a1d8-458eda413878" (UID: "9d09a627-470a-4719-a1d8-458eda413878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.469597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv" (OuterVolumeSpecName: "kube-api-access-dljxv") pod "632e98c6-d202-4c07-9220-636bd07da76d" (UID: "632e98c6-d202-4c07-9220-636bd07da76d"). InnerVolumeSpecName "kube-api-access-dljxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.471510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m" (OuterVolumeSpecName: "kube-api-access-h7n8m") pod "9d09a627-470a-4719-a1d8-458eda413878" (UID: "9d09a627-470a-4719-a1d8-458eda413878"). InnerVolumeSpecName "kube-api-access-h7n8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564674 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564698 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564708 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564717 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.913308 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.914741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerDied","Data":"f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.914787 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerDied","Data":"05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916605 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916697 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249080 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:40 crc kubenswrapper[4766]: E0130 17:51:40.249922 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249942 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: E0130 17:51:40.249989 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249999 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.250225 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.250266 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.251123 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.252784 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.252988 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.253064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dxxvc" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.259792 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393324 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393417 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc 
kubenswrapper[4766]: I0130 17:51:40.494878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.494953 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.494986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.501700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.502131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.510704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.614707 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.145943 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.941499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerStarted","Data":"a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e"} Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.941554 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerStarted","Data":"9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c"} Jan 30 17:51:44 crc kubenswrapper[4766]: I0130 17:51:44.964344 4766 generic.go:334] "Generic (PLEG): container finished" podID="1262aa38-ee4d-4579-b034-3669dd58a238" containerID="a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e" exitCode=0 Jan 30 17:51:44 crc kubenswrapper[4766]: I0130 17:51:44.964444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerDied","Data":"a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e"} Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.297839 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396432 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.405382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w" (OuterVolumeSpecName: "kube-api-access-fmp8w") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "kube-api-access-fmp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.436691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.437241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config" (OuterVolumeSpecName: "config") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499265 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499326 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499344 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerDied","Data":"9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c"} Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980281 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980292 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.203321 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:47 crc kubenswrapper[4766]: E0130 17:51:47.204071 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.204099 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.204367 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.205477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.213727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.302592 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.304524 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.307913 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.308039 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.308245 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dxxvc" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.313030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.313074 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.316703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.414948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 
17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415159 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415217 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415270 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.416146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417459 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.448682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.522689 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.523403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.523741 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.524586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.541417 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.637732 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:48 crc kubenswrapper[4766]: I0130 17:51:48.018158 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:48 crc kubenswrapper[4766]: I0130 17:51:48.357452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:48.998534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"b2dc6f2009589171a2fecbbf84375aa1b0bc4bfae1376d7014628cf51dddb1b0"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000046 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerID="55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729" exitCode=0 Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"bd74302c924e258fde4a2f09fea2671e40e3d24fd058c8828e9537e4000ff226"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000792 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"b35542c0826b0f1fa728f5e02e3f960926f34005248508c4b99a54bf50cb8f1f"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000841 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerStarted","Data":"b79a964de471e1d1b203d59d894a14ac3d8e1bae897a81215e4af1ded098934b"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.030656 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-577cfcb8f7-k7t7l" podStartSLOduration=2.030634033 podStartE2EDuration="2.030634033s" 
podCreationTimestamp="2026-01-30 17:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:49.029604415 +0000 UTC m=+5363.667561761" watchObservedRunningTime="2026-01-30 17:51:49.030634033 +0000 UTC m=+5363.668591389" Jan 30 17:51:50 crc kubenswrapper[4766]: I0130 17:51:50.009508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerStarted","Data":"90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4"} Jan 30 17:51:50 crc kubenswrapper[4766]: I0130 17:51:50.035067 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podStartSLOduration=3.035048542 podStartE2EDuration="3.035048542s" podCreationTimestamp="2026-01-30 17:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:50.029255524 +0000 UTC m=+5364.667212870" watchObservedRunningTime="2026-01-30 17:51:50.035048542 +0000 UTC m=+5364.673005888" Jan 30 17:51:51 crc kubenswrapper[4766]: I0130 17:51:51.029748 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.525328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.597072 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.597400 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" containerID="cri-o://325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd" gracePeriod=10 Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079673 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cfd4446-3501-49ef-911f-360c75070ca8" containerID="325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd" exitCode=0 Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"} Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079978 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232"} Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079990 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.139118 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218731 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218866 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.225047 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh" (OuterVolumeSpecName: "kube-api-access-s7fsh") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "kube-api-access-s7fsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.264413 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.267458 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.276424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config" (OuterVolumeSpecName: "config") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.305573 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322937 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322968 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322979 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322986 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322997 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.086653 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.129368 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:59 crc kubenswrapper[4766]: E0130 17:51:59.133273 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cfd4446_3501_49ef_911f_360c75070ca8.slice/crio-2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232\": RecentStats: unable to find data in memory cache]" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.137698 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:52:00 crc kubenswrapper[4766]: I0130 17:52:00.048421 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" path="/var/lib/kubelet/pods/8cfd4446-3501-49ef-911f-360c75070ca8/volumes" Jan 30 17:52:17 crc kubenswrapper[4766]: I0130 17:52:17.655331 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.902493 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:23 crc kubenswrapper[4766]: E0130 17:52:23.903385 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903403 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: E0130 17:52:23.903428 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="init" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903436 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="init" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903622 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.904461 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.911908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.959299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.959378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.015726 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.017219 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.034100 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.036283 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.063432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.083076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.164409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.164573 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.165220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.184987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.257859 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.338426 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.782767 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.845369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: W0130 17:52:24.851792 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01e6326_2d83_4889_9b7a_f45b9f6f3063.slice/crio-90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff WatchSource:0}: Error finding container 90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff: Status 404 returned error can't find the container with id 90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310057 4766 generic.go:334] "Generic (PLEG): container finished" podID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerID="5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad" exitCode=0 Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310165 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerDied","Data":"5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerStarted","Data":"5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316233 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerID="ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae" exitCode=0 Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316395 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerDied","Data":"ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerStarted","Data":"90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff"} Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.788704 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.794867 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920013 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"5946960e-4a1d-4360-ae75-7648934eeb0c\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"5946960e-4a1d-4360-ae75-7648934eeb0c\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.921050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f01e6326-2d83-4889-9b7a-f45b9f6f3063" (UID: "f01e6326-2d83-4889-9b7a-f45b9f6f3063"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.921307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5946960e-4a1d-4360-ae75-7648934eeb0c" (UID: "5946960e-4a1d-4360-ae75-7648934eeb0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.926577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w" (OuterVolumeSpecName: "kube-api-access-tjc6w") pod "f01e6326-2d83-4889-9b7a-f45b9f6f3063" (UID: "f01e6326-2d83-4889-9b7a-f45b9f6f3063"). InnerVolumeSpecName "kube-api-access-tjc6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.927131 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml" (OuterVolumeSpecName: "kube-api-access-6zpml") pod "5946960e-4a1d-4360-ae75-7648934eeb0c" (UID: "5946960e-4a1d-4360-ae75-7648934eeb0c"). InnerVolumeSpecName "kube-api-access-6zpml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024930 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024960 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024970 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024980 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348441 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerDied","Data":"5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca"} Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348490 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348562 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerDied","Data":"90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff"} Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351549 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351556 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186057 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:29 crc kubenswrapper[4766]: E0130 17:52:29.186761 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186779 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: E0130 17:52:29.186816 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186824 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187005 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187024 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187622 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.193323 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.193487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.203468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378194 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod 
\"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.385164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.386436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.392201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.412124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.504130 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:30 crc kubenswrapper[4766]: I0130 17:52:30.061684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:30 crc kubenswrapper[4766]: I0130 17:52:30.375538 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerStarted","Data":"0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d"} Jan 30 17:52:31 crc kubenswrapper[4766]: I0130 17:52:31.393049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerStarted","Data":"c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1"} Jan 30 17:52:31 crc kubenswrapper[4766]: I0130 17:52:31.411090 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ngkz2" podStartSLOduration=2.411067846 podStartE2EDuration="2.411067846s" podCreationTimestamp="2026-01-30 17:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:31.40758135 +0000 UTC m=+5406.045538706" watchObservedRunningTime="2026-01-30 17:52:31.411067846 +0000 UTC m=+5406.049025192" Jan 30 17:52:34 crc kubenswrapper[4766]: I0130 17:52:34.417484 4766 generic.go:334] "Generic (PLEG): container finished" podID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerID="c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1" exitCode=0 Jan 30 17:52:34 crc kubenswrapper[4766]: I0130 17:52:34.417565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" 
event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerDied","Data":"c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1"} Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.785876 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917929 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.923411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7" (OuterVolumeSpecName: "kube-api-access-xhlz7") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "kube-api-access-xhlz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.923684 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.943519 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.967163 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data" (OuterVolumeSpecName: "config-data") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020365 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020416 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020434 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020453 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerDied","Data":"0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d"} Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436161 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436278 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704060 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: E0130 17:52:36.704432 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704696 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704882 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.705804 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.709436 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.710003 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.710615 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.712060 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.742727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834643 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834702 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835364 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835579 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.837757 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.846903 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.915774 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.921997 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.924621 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938374 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938514 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " 
pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938549 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938676 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938777 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.939501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.942913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.945994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.946088 
4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.947457 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.948407 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.949170 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.956971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.031317 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039890 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrln\" (UniqueName: 
\"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040205 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040467 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040534 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.041259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.041962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.060636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142819 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.143203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.143317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.151535 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.153354 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.156629 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.162781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.164658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.166752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.235813 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.618876 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.683494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:37 crc kubenswrapper[4766]: W0130 17:52:37.689293 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf59ac31c_2444_4acf_b7a1_d4bce77181bf.slice/crio-1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d WatchSource:0}: Error finding container 1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d: Status 404 returned error can't find the container with id 1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.796641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.925985 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:38 crc kubenswrapper[4766]: W0130 17:52:38.001848 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bbeed9_9ddf_41e7_b48f_d56bb0f18cf7.slice/crio-01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970 WatchSource:0}: Error finding container 01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970: Status 404 returned error can't find the container with id 01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970 Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.460930 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.460974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"ce16efda73744e8835d19632655d30fbc343d2c46facf078ece46d87dcbd8fe6"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465329 4766 generic.go:334] "Generic (PLEG): container finished" podID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16" exitCode=0 Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465407 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465436 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerStarted","Data":"1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.467329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.477341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.477690 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478828 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478912 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" containerID="cri-o://1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" gracePeriod=30 Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478887 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" containerID="cri-o://da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" gracePeriod=30 Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.482166 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerStarted","Data":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.483055 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.496569 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.496549936 podStartE2EDuration="3.496549936s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.495938749 +0000 UTC m=+5414.133896115" watchObservedRunningTime="2026-01-30 17:52:39.496549936 +0000 UTC m=+5414.134507282" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.519334 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podStartSLOduration=3.5193150539999998 podStartE2EDuration="3.519315054s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.515357147 +0000 UTC m=+5414.153314503" watchObservedRunningTime="2026-01-30 17:52:39.519315054 +0000 UTC m=+5414.157272400" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.541094 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.541080596 podStartE2EDuration="3.541080596s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.54012969 +0000 UTC m=+5414.178087046" watchObservedRunningTime="2026-01-30 17:52:39.541080596 +0000 UTC m=+5414.179037942" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.669796 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.170437 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229623 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229791 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230002 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230969 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs" (OuterVolumeSpecName: "logs") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.237534 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph" (OuterVolumeSpecName: "ceph") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.243826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt" (OuterVolumeSpecName: "kube-api-access-dpxlt") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "kube-api-access-dpxlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.243976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts" (OuterVolumeSpecName: "scripts") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.255914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.280059 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data" (OuterVolumeSpecName: "config-data") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.332742 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333028 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333138 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333247 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333334 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333492 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333594 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491875 4766 generic.go:334] "Generic (PLEG): container finished" podID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" exitCode=0 Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.492913 4766 generic.go:334] "Generic (PLEG): container finished" podID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" exitCode=143 Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491940 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"ce16efda73744e8835d19632655d30fbc343d2c46facf078ece46d87dcbd8fe6"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493200 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.531078 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.542645 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.543761 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569520 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.569964 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569977 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.569992 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.570165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.570214 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.571224 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.574583 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.578800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.588927 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.594505 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.594589 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} err="failed to get container status \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.594631 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.595125 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595196 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} err="failed to get container status \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595237 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} err="failed to get container status \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID 
starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595591 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595792 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} err="failed to get container status \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638616 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638691 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638749 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
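The repeated "DeleteContainer returned error ... NotFound" pairs here are benign: the containers were already gone by the time cleanup re-ran for the replacement pod, and a NotFound on removal means the desired end state is already reached. A small Go sketch of that idempotent handling, using the standard gRPC status package (the client surface is invented, not the kubelet's exact code):

    package cleanup

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Invented CRI-ish client surface; only the NotFound handling matters.
    type runtimeClient interface {
        RemoveContainer(id string) error
    }

    // removeContainer treats NotFound as success: if the container was
    // already deleted by a racing cleanup pass, there is nothing to do.
    func removeContainer(c runtimeClient, id string) error {
        err := c.RemoveContainer(id)
        if err != nil && status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

Logging the NotFound (as the kubelet does) while still proceeding keeps the race visible without failing the sync.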
\"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.740947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741008 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741573 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.742495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.743103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746356 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.747981 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.759906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.909571 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.432908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:41 crc kubenswrapper[4766]: W0130 17:52:41.433577 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7946b0e6_2de2_4708_ac83_ce1ad398d8a5.slice/crio-d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b WatchSource:0}: Error finding container d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b: Status 404 returned error can't find the container with id d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.523231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b"} Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.524503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" containerID="cri-o://0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" gracePeriod=30 Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.524629 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" containerID="cri-o://ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" gracePeriod=30 Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.992147 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.053649 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" path="/var/lib/kubelet/pods/cc97dcd2-d933-4049-b658-f84b0a58dceb/volumes" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.064888 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065011 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065841 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.066258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs" (OuterVolumeSpecName: "logs") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.067902 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.067923 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.069599 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph" (OuterVolumeSpecName: "ceph") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.070087 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts" (OuterVolumeSpecName: "scripts") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.075426 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k" (OuterVolumeSpecName: "kube-api-access-gwb5k") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "kube-api-access-gwb5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.093353 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.128108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data" (OuterVolumeSpecName: "config-data") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.169973 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170013 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170023 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170032 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170042 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.534313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.534369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535929 4766 generic.go:334] "Generic (PLEG): container finished" podID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" exitCode=0 Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535964 4766 generic.go:334] "Generic (PLEG): container finished" podID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" exitCode=143 Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536027 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536047 4766 scope.go:117] "RemoveContainer" 
containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536436 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.575687 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.575657271 podStartE2EDuration="2.575657271s" podCreationTimestamp="2026-01-30 17:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:42.559844111 +0000 UTC m=+5417.197801467" watchObservedRunningTime="2026-01-30 17:52:42.575657271 +0000 UTC m=+5417.213614617" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.589479 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.607257 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.621108 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.627699 4766 scope.go:117] "RemoveContainer" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.628964 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629000 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} err="failed to get container status \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629026 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.629426 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629444 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} err="failed to get container status 
\"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629457 4766 scope.go:117] "RemoveContainer" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629677 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} err="failed to get container status \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629690 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.630686 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} err="failed to get container status \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.636630 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.637105 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637123 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.637165 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637171 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637355 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637380 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.638517 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.642643 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.651302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.686807 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.686964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.687124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.687243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.792571 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.798954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.801699 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: 
I0130 17:52:42.808128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.811701 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.815710 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.832958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.999206 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:43 crc kubenswrapper[4766]: I0130 17:52:43.527973 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.054755 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" path="/var/lib/kubelet/pods/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7/volumes" Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"a636aed8819668fe27e888c223782c929538ea199ee28b047c4b35c7334f0992"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.590925 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.590902241 podStartE2EDuration="2.590902241s" podCreationTimestamp="2026-01-30 17:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:44.582357169 +0000 UTC m=+5419.220314525" watchObservedRunningTime="2026-01-30 17:52:44.590902241 +0000 
UTC m=+5419.228859587" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.158394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.224624 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.224906 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" containerID="cri-o://90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" gracePeriod=10 Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.591575 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerID="90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" exitCode=0 Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.591664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4"} Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.686860 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.801961 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802194 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.817563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp" (OuterVolumeSpecName: "kube-api-access-5fvkp") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" 
(UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "kube-api-access-5fvkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.845369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.845399 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.846640 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config" (OuterVolumeSpecName: "config") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.856830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904662 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904697 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904708 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904718 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904726 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606651 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"b79a964de471e1d1b203d59d894a14ac3d8e1bae897a81215e4af1ded098934b"} Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606716 4766 scope.go:117] "RemoveContainer" containerID="90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606795 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.641379 4766 scope.go:117] "RemoveContainer" containerID="55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.645808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.656293 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.052889 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" path="/var/lib/kubelet/pods/3a7525bc-5e61-4580-b6ec-03ee13b7eefe/volumes" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.909924 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.910328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.949382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.955249 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:52:51 crc kubenswrapper[4766]: I0130 17:52:51.637855 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 17:52:51 crc kubenswrapper[4766]: I0130 17:52:51.638007 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 17:52:52 crc kubenswrapper[4766]: I0130 17:52:52.525345 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.30:5353: i/o timeout" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.000127 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.000580 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.036193 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.052781 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.610262 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.633197 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.658815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 
17:52:53.658848 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:55 crc kubenswrapper[4766]: I0130 17:52:55.591326 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:55 crc kubenswrapper[4766]: I0130 17:52:55.600932 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.026213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: E0130 17:53:01.027231 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027244 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: E0130 17:53:01.027281 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="init" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027290 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="init" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027522 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.028349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.030619 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.032329 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.033852 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.046818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.057418 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.115893 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.115994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.116322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.116489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220325 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: 
\"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.221092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.221344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.245089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.245111 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.365029 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.376618 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: W0130 17:53:01.807733 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ade9e5_b989_431e_995d_1dec1432ed75.slice/crio-96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3 WatchSource:0}: Error finding container 96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3: Status 404 returned error can't find the container with id 96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3 Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.816664 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.889087 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: W0130 17:53:01.890118 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb39a90f_2911_4e3f_a034_025eb6f8077d.slice/crio-c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1 WatchSource:0}: Error finding container c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1: Status 404 returned error can't find the container with id c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743070 4766 generic.go:334] "Generic (PLEG): container finished" podID="03ade9e5-b989-431e-995d-1dec1432ed75" containerID="cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db" exitCode=0 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerDied","Data":"cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerStarted","Data":"96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746495 4766 generic.go:334] "Generic (PLEG): container finished" podID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerID="8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a" exitCode=0 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerDied","Data":"8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746579 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerStarted","Data":"c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.040565 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.137837 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"cb39a90f-2911-4e3f-a034-025eb6f8077d\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"cb39a90f-2911-4e3f-a034-025eb6f8077d\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb39a90f-2911-4e3f-a034-025eb6f8077d" (UID: "cb39a90f-2911-4e3f-a034-025eb6f8077d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.173361 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.177196 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw" (OuterVolumeSpecName: "kube-api-access-lhwvw") pod "cb39a90f-2911-4e3f-a034-025eb6f8077d" (UID: "cb39a90f-2911-4e3f-a034-025eb6f8077d"). InnerVolumeSpecName "kube-api-access-lhwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274150 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"03ade9e5-b989-431e-995d-1dec1432ed75\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"03ade9e5-b989-431e-995d-1dec1432ed75\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03ade9e5-b989-431e-995d-1dec1432ed75" (UID: "03ade9e5-b989-431e-995d-1dec1432ed75"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.275023 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.275040 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.276788 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts" (OuterVolumeSpecName: "kube-api-access-pt5ts") pod "03ade9e5-b989-431e-995d-1dec1432ed75" (UID: "03ade9e5-b989-431e-995d-1dec1432ed75"). InnerVolumeSpecName "kube-api-access-pt5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.376409 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.765882 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.765895 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerDied","Data":"96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.766018 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767899 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerDied","Data":"c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767920 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767994 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.442632 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:06 crc kubenswrapper[4766]: E0130 17:53:06.444914 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445019 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: E0130 17:53:06.445112 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445190 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445405 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445478 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.450755 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.460580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.481119 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.482273 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.484820 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bvph7" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.484987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.488593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.507642 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.521944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.521999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522027 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522188 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623528 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 
30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623638 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623691 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.624747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.625286 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.625812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.627116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.647126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725968 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.726031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.726061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod 
\"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.728723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.728792 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.729614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.741775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.773763 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.799774 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.230870 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.339131 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:07 crc kubenswrapper[4766]: W0130 17:53:07.342387 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d89feb8_9495_4c8a_a424_37720df352bb.slice/crio-8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6 WatchSource:0}: Error finding container 8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6: Status 404 returned error can't find the container with id 8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6 Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.792760 4766 generic.go:334] "Generic (PLEG): container finished" podID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerID="97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939" exitCode=0 Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.792822 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.793470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerStarted","Data":"62402daa4d1e00e414a6153806e7a4ebba06101c39ecd01fd579e17d1df427fb"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.797446 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerStarted","Data":"5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.797478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerStarted","Data":"8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.843680 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hn8dr" podStartSLOduration=1.843661135 podStartE2EDuration="1.843661135s" podCreationTimestamp="2026-01-30 17:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:07.839606614 +0000 UTC m=+5442.477563980" watchObservedRunningTime="2026-01-30 17:53:07.843661135 +0000 UTC m=+5442.481618481" Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.806361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerStarted","Data":"5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a"} Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.807066 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.830134 
4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podStartSLOduration=2.830112375 podStartE2EDuration="2.830112375s" podCreationTimestamp="2026-01-30 17:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:08.821366177 +0000 UTC m=+5443.459323523" watchObservedRunningTime="2026-01-30 17:53:08.830112375 +0000 UTC m=+5443.468069721" Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.045719 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.045829 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.817066 4766 generic.go:334] "Generic (PLEG): container finished" podID="2d89feb8-9495-4c8a-a424-37720df352bb" containerID="5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2" exitCode=0 Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.817128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerDied","Data":"5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2"} Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.190536 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr"
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") "
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") "
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") "
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") "
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") "
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.319272 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs" (OuterVolumeSpecName: "logs") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.324493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h" (OuterVolumeSpecName: "kube-api-access-62m8h") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "kube-api-access-62m8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.337475 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts" (OuterVolumeSpecName: "scripts") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.343646 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.345985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data" (OuterVolumeSpecName: "config-data") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421485 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421526 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421541 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421553 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421567 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876164 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerDied","Data":"8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6"}
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876225 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hn8dr"
Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876234 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.300560 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"]
Jan 30 17:53:12 crc kubenswrapper[4766]: E0130 17:53:12.300986 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.300998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.301165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.303221 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306109 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306267 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bvph7"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306409 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.317495 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"]
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450095 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.552643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553065 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553233 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.557232 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.557408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.558651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.573769 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt"
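Editor's note: the unmount/mount traffic above is the kubelet's volume reconciler converging actual state onto desired state (the reconciler_common.go lines). A minimal Go sketch of that desired-vs-actual diff, with hypothetical pod volume names; this is a toy model, not kubelet's real reconciler or its types:

package main

import "fmt"

// diff returns which volumes must be mounted and which must be unmounted
// to converge the actual set onto the desired set, mirroring the shape of
// the reconcile loop whose log lines appear above.
func diff(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	return
}

func main() {
	// Hypothetical example: the db-sync pod's volumes are going away while
	// the new placement pod's volumes are being attached.
	desired := map[string]bool{"config-data": true, "scripts": true, "logs": true}
	actual := map[string]bool{"logs": true, "kube-api-access-62m8h": true}
	m, u := diff(desired, actual)
	fmt.Println("mount:", m, "unmount:", u)
}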
Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.621336 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.081630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"]
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"901b04b4e1ac0fafdff2182ed215c2255dca7b47f2ab1f0665b6d4476dfdb4c9"}
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"3cdf4ccf7a30494a80084afaed49ab019233f05b0a510591d6225b7293978583"}
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"239fd21e97b863b10dbab23654bad42aff7e4b17b3c9e3a5f993df36733b5427"}
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895843 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.929796 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cf79c7456-bp9jt" podStartSLOduration=1.929772105 podStartE2EDuration="1.929772105s" podCreationTimestamp="2026-01-30 17:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:13.918037956 +0000 UTC m=+5448.555995302" watchObservedRunningTime="2026-01-30 17:53:13.929772105 +0000 UTC m=+5448.567729451"
Jan 30 17:53:14 crc kubenswrapper[4766]: I0130 17:53:14.904933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cf79c7456-bp9jt"
Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.775690 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw"
Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.857976 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"]
Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.858241 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" containerID="cri-o://12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" gracePeriod=10
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.328557 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq"
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") "
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441617 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") "
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441719 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") "
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") "
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") "
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.447802 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln" (OuterVolumeSpecName: "kube-api-access-qcrln") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "kube-api-access-qcrln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.486574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config" (OuterVolumeSpecName: "config") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.494022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.497611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.498062 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543655 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543688 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543697 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543763 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543774 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932762 4766 generic.go:334] "Generic (PLEG): container finished" podID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" exitCode=0
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932822 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq"
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"}
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932943 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d"}
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932978 4766 scope.go:117] "RemoveContainer" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.959608 4766 scope.go:117] "RemoveContainer" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.983074 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"]
Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.991104 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"]
Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.049281 4766 scope.go:117] "RemoveContainer" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"
Jan 30 17:53:18 crc kubenswrapper[4766]: E0130 17:53:18.050270 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": container with ID starting with 12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c not found: ID does not exist" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"
Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.050322 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"} err="failed to get container status \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": rpc error: code = NotFound desc = could not find container \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": container with ID starting with 12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c not found: ID does not exist"
Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.050356 4766 scope.go:117] "RemoveContainer" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"
Jan 30 17:53:18 crc kubenswrapper[4766]: E0130 17:53:18.051068 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": container with ID starting with 7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16 not found: ID does not exist" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"
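Editor's note: the NotFound errors above are a benign race — the container was already gone by the time the status lookup ran, and the kubelet treats that as a successful delete. A sketch of the same idempotent-cleanup pattern using the standard gRPC status codes; lookup and removeContainer are hypothetical stand-ins, not CRI-O's or kubelet's real functions:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a gRPC NotFound from the runtime as success,
// matching how the DeleteContainer path above tolerates the race.
func removeContainer(id string, lookup func(string) error) error {
	if err := lookup(id); status.Code(err) == codes.NotFound {
		return nil // already deleted elsewhere: idempotent success
	} else if err != nil {
		return err // any other runtime error is real
	}
	fmt.Println("removing", id)
	return nil
}

func main() {
	// Simulate the runtime answering "ID does not exist".
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("12b98c74", gone)) // prints <nil>
}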
\"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": rpc error: code = NotFound desc = could not find container \"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": container with ID starting with 7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16 not found: ID does not exist" Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.054146 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" path="/var/lib/kubelet/pods/f59ac31c-2444-4acf-b7a1-d4bce77181bf/volumes" Jan 30 17:53:22 crc kubenswrapper[4766]: I0130 17:53:22.157617 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.36:5353: i/o timeout" Jan 30 17:53:31 crc kubenswrapper[4766]: I0130 17:53:31.813238 4766 scope.go:117] "RemoveContainer" containerID="b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39" Jan 30 17:53:39 crc kubenswrapper[4766]: I0130 17:53:39.045478 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:53:39 crc kubenswrapper[4766]: I0130 17:53:39.046029 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:53:43 crc kubenswrapper[4766]: I0130 17:53:43.652601 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:44 crc kubenswrapper[4766]: I0130 17:53:44.734600 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.771467 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 17:54:07 crc kubenswrapper[4766]: E0130 17:54:07.772245 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: E0130 17:54:07.772285 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="init" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772293 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="init" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772444 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.773044 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.838263 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.877126 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.878663 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.891153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.943223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.943452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.972958 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.974194 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.976454 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.982289 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052290 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052744 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.054761 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.069163 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.071256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.078644 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.084452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.145248 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154631 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154744 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154775 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.155894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.175632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.184650 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.186426 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.191487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.193742 4766 util.go:30] "No sandbox for pod can be found. 
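Editor's note: five nova-* job pods are created and mounted in an interleaved burst above, which makes the stream hard to follow by eye. When triaging a capture like this one it can help to tally the "SyncLoop (PLEG)" events per pod; a small reader under the assumption that the log keeps the exact line shapes seen here (the regexp is an assumption about this artifact's format, not a kubelet API):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like:
//   ... "SyncLoop (PLEG): event for pod" pod="openstack/x" event={"ID":"...","Type":"ContainerDied",...}
var pleg = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{[^}]*"Type":"([^"]+)"`)

func main() {
	counts := map[string]map[string]int{} // pod -> event type -> count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			if counts[m[1]] == nil {
				counts[m[1]] = map[string]int{}
			}
			counts[m[1]][m[2]]++
		}
	}
	for pod, byType := range counts {
		fmt.Println(pod, byType)
	}
}

Usage would be something like: go run tally.go < kubelet.log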
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.193742 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.202764 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"]
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.257936 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.259386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.290033 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.291321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.344120 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"]
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.345805 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.348565 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359588 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359609 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.365596 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.376909 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"]
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.379107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.427199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461054 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461596 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.462485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.480988 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.564214 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.564601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.565641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.582065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.658239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.677661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g"
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.692034 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwwb9"]
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.821854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"]
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.908463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"]
Jan 30 17:54:08 crc kubenswrapper[4766]: W0130 17:54:08.916150 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230985b1_39a5_440c_b67a_97bed8481bd6.slice/crio-4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8 WatchSource:0}: Error finding container 4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8: Status 404 returned error can't find the container with id 4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8
Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.986249 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"]
Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047421 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047488 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
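Editor's note: the machine-config-daemon lines above show an HTTP liveness probe failing ("connect: connection refused" against http://127.0.0.1:8798/health) and the kubelet deciding to restart the container. A minimal sketch of the probe semantics only — any transport error or non-2xx/3xx status counts as a failure; the real prober also applies the pod spec's timeouts, thresholds and headers, which this sketch omits:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP returns whether one HTTP probe attempt succeeds, plus a detail
// string resembling the "output=" field in the log above.
func probeHTTP(url string) (ok bool, detail string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error() // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	// 2xx and 3xx are treated as success; anything else is a failure.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, resp.Status
	}
	return false, resp.Status
}

func main() {
	ok, detail := probeHTTP("http://127.0.0.1:8798/health")
	fmt.Printf("probeResult=%v output=%q\n", ok, detail)
}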
containerStatusID={"Type":"cri-o","ID":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.048653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78" gracePeriod=600 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.060170 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 17:54:09 crc kubenswrapper[4766]: W0130 17:54:09.072615 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2114339_89f3_4232_94e1_d4323d23978b.slice/crio-952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa WatchSource:0}: Error finding container 952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa: Status 404 returned error can't find the container with id 952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.182645 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 17:54:09 crc kubenswrapper[4766]: W0130 17:54:09.202805 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e9cfc2_7b7d_47eb_aece_ed9fe716594a.slice/crio-69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290 WatchSource:0}: Error finding container 69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290: Status 404 returned error can't find the container with id 69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.465053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerStarted","Data":"952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470580 4766 generic.go:334] "Generic (PLEG): container finished" podID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerID="3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerDied","Data":"3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerStarted","Data":"8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.474802 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78" 
exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.474919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.475032 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.476645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerStarted","Data":"c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481522 4766 generic.go:334] "Generic (PLEG): container finished" podID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerID="c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerDied","Data":"c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerStarted","Data":"202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486257 4766 generic.go:334] "Generic (PLEG): container finished" podID="230985b1-39a5-440c-b67a-97bed8481bd6" containerID="afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerDied","Data":"afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerStarted","Data":"4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.489355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerStarted","Data":"69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.502591 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.506798 4766 generic.go:334] "Generic (PLEG): container finished" podID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerID="84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195" exitCode=0 Jan 
30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.506919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerDied","Data":"84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.509941 4766 generic.go:334] "Generic (PLEG): container finished" podID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerID="4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66" exitCode=0 Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.510052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerDied","Data":"4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.512551 4766 generic.go:334] "Generic (PLEG): container finished" podID="e2114339-89f3-4232-94e1-d4323d23978b" containerID="69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61" exitCode=0 Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.512619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerDied","Data":"69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.998856 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.005906 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.012498 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.115890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"230985b1-39a5-440c-b67a-97bed8481bd6\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116320 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"230985b1-39a5-440c-b67a-97bed8481bd6\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116461 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"03cd48e2-831c-4067-ae82-6aa11c3ed219\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"eda85bd2-cef5-4dba-b322-a9f16aced872\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"03cd48e2-831c-4067-ae82-6aa11c3ed219\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"eda85bd2-cef5-4dba-b322-a9f16aced872\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.117860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03cd48e2-831c-4067-ae82-6aa11c3ed219" (UID: "03cd48e2-831c-4067-ae82-6aa11c3ed219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.117907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eda85bd2-cef5-4dba-b322-a9f16aced872" (UID: "eda85bd2-cef5-4dba-b322-a9f16aced872"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.118280 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "230985b1-39a5-440c-b67a-97bed8481bd6" (UID: "230985b1-39a5-440c-b67a-97bed8481bd6"). InnerVolumeSpecName "operator-scripts". 
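Editor's note: the teardown sequence above ends, for each pod, with every volume reporting "TearDown succeeded" and "Volume detached", after which the kubelet can remove the pod directory — the earlier "Cleaned up orphaned pod volumes dir" line for the dnsmasq pod is that final step. A sketch of the on-disk check, using the path layout visible in the log (/var/lib/kubelet/pods/<uid>/volumes); this is a simplified inspection helper, not kubelet's cleanup, which also examines subpaths and live mounts:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// volumesStillPresent lists pod UIDs under podsDir whose volumes/ directory
// still contains entries. For running pods that is normal; for deleted pods
// it is what the kubelet's orphan cleanup would go on to remove.
func volumesStillPresent(podsDir string) ([]string, error) {
	var found []string
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, p := range pods {
		if !p.IsDir() {
			continue
		}
		vols, err := os.ReadDir(filepath.Join(podsDir, p.Name(), "volumes"))
		if err != nil {
			continue // no volumes dir left: nothing to report
		}
		if len(vols) > 0 {
			found = append(found, p.Name())
		}
	}
	return found, nil
}

func main() {
	found, err := volumesStillPresent("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("pods with volumes still present:", found)
}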
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.125395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb" (OuterVolumeSpecName: "kube-api-access-rkgdb") pod "230985b1-39a5-440c-b67a-97bed8481bd6" (UID: "230985b1-39a5-440c-b67a-97bed8481bd6"). InnerVolumeSpecName "kube-api-access-rkgdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.131342 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp" (OuterVolumeSpecName: "kube-api-access-v74xp") pod "03cd48e2-831c-4067-ae82-6aa11c3ed219" (UID: "03cd48e2-831c-4067-ae82-6aa11c3ed219"). InnerVolumeSpecName "kube-api-access-v74xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.131679 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj" (OuterVolumeSpecName: "kube-api-access-hs9bj") pod "eda85bd2-cef5-4dba-b322-a9f16aced872" (UID: "eda85bd2-cef5-4dba-b322-a9f16aced872"). InnerVolumeSpecName "kube-api-access-hs9bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220023 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220075 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220092 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220107 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220121 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220135 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerDied","Data":"8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523464 4766 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523488 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.527879 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerDied","Data":"202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.527935 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.528009 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.531580 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.533338 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerDied","Data":"4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.533387 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.927810 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.029416 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036011 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"caa501cc-1f23-4a0c-b845-31c9ae218be6\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"caa501cc-1f23-4a0c-b845-31c9ae218be6\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036903 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caa501cc-1f23-4a0c-b845-31c9ae218be6" (UID: "caa501cc-1f23-4a0c-b845-31c9ae218be6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.061525 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd" (OuterVolumeSpecName: "kube-api-access-k27qd") pod "caa501cc-1f23-4a0c-b845-31c9ae218be6" (UID: "caa501cc-1f23-4a0c-b845-31c9ae218be6"). InnerVolumeSpecName "kube-api-access-k27qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.062223 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.088353 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"e2114339-89f3-4232-94e1-d4323d23978b\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165385 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165447 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"e2114339-89f3-4232-94e1-d4323d23978b\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.166140 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.167152 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2114339-89f3-4232-94e1-d4323d23978b" (UID: "e2114339-89f3-4232-94e1-d4323d23978b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.167279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" (UID: "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.169132 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758" (OuterVolumeSpecName: "kube-api-access-tq758") pod "e2114339-89f3-4232-94e1-d4323d23978b" (UID: "e2114339-89f3-4232-94e1-d4323d23978b"). InnerVolumeSpecName "kube-api-access-tq758". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.171208 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4" (OuterVolumeSpecName: "kube-api-access-4szf4") pod "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" (UID: "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a"). InnerVolumeSpecName "kube-api-access-4szf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.268949 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.268996 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.269027 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.269040 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerDied","Data":"69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552240 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552148 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerDied","Data":"952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554942 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554952 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.557961 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerDied","Data":"c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.558019 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.558101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.480631 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481585 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481606 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481627 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481660 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481668 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481682 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481690 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481708 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481716 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481730 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481737 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481918 4766 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481931 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481942 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481966 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481981 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.482003 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.482854 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.486222 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.487491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cbjlt" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.489974 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.509633 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592271 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.698393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.706375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.707099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.720000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.725723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.804088 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:14 crc kubenswrapper[4766]: I0130 17:54:14.348266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:14 crc kubenswrapper[4766]: I0130 17:54:14.597874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerStarted","Data":"da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131"} Jan 30 17:54:15 crc kubenswrapper[4766]: I0130 17:54:15.609889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerStarted","Data":"b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256"} Jan 30 17:54:15 crc kubenswrapper[4766]: I0130 17:54:15.633752 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jccb8" podStartSLOduration=2.63373121 podStartE2EDuration="2.63373121s" podCreationTimestamp="2026-01-30 17:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:15.628005584 +0000 UTC m=+5510.265962960" watchObservedRunningTime="2026-01-30 17:54:15.63373121 +0000 UTC m=+5510.271688576" Jan 30 17:54:22 crc kubenswrapper[4766]: I0130 17:54:22.680158 4766 generic.go:334] "Generic (PLEG): container finished" podID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerID="b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256" exitCode=0 Jan 30 17:54:22 crc kubenswrapper[4766]: I0130 17:54:22.680340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerDied","Data":"b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256"} Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.080946 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201170 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201781 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.207429 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts" (OuterVolumeSpecName: "scripts") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.207587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz" (OuterVolumeSpecName: "kube-api-access-v5cwz") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "kube-api-access-v5cwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.227488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.228362 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data" (OuterVolumeSpecName: "config-data") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303541 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303817 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303912 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303996 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerDied","Data":"da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131"} Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724388 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724336 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.808679 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:24 crc kubenswrapper[4766]: E0130 17:54:24.809299 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.809323 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.809550 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.810339 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.815319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.815534 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cbjlt" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.817944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015783 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.020360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.021367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.042379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.136526 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.575945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:25 crc kubenswrapper[4766]: W0130 17:54:25.578446 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6725384_f878_416e_832e_64ea63dc6698.slice/crio-04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2 WatchSource:0}: Error finding container 04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2: Status 404 returned error can't find the container with id 04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2 Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.735933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerStarted","Data":"04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2"} Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.747583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerStarted","Data":"c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa"} Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.748050 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.781595 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7815695099999997 podStartE2EDuration="2.78156951s" podCreationTimestamp="2026-01-30 17:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:26.773316885 +0000 UTC m=+5521.411274261" watchObservedRunningTime="2026-01-30 17:54:26.78156951 +0000 UTC m=+5521.419526886" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.163360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.559323 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.560757 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.563542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.563826 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.568985 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.690626 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.694515 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.696738 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.702933 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722003 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.771752 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.773106 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.776840 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.788470 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824348 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824458 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.831469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.835729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.846764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.848967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.867143 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.868651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.875216 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.883787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926281 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926310 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926346 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926372 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926468 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.927155 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.931626 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.931994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.932610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.959500 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.980384 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.989018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.996398 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.016689 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031277 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031498 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031820 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.032043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.043364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.043986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.061198 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvx58\" 
(UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.088876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.092755 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.121683 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.123545 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.130246 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.135947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136072 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136218 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136251 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nppkt\" (UniqueName: 
\"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.137851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.141336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.159434 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.200787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.226627 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237655 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237678 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.243121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.259935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.268200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.342898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.342972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343028 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod 
\"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343076 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343120 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.362018 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.381961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.387938 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.484383 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.682018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.784087 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.804463 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerStarted","Data":"8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08"}
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.837724 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.840138 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.844846 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.845892 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.850288 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.885907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.931387 4766 scope.go:117] "RemoveContainer" containerID="d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.960862 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962410 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962434 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:31 crc kubenswrapper[4766]: W0130 17:54:31.967502 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f688a02_a337_43d9_9cc8_ca5d7ba19898.slice/crio-9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b WatchSource:0}: Error finding container 9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b: Status 404 returned error can't find the container with id 9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b
Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.984231 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.063951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064001 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.069837 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.070709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.071672 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.086614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.092276 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"]
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.098484 4766 scope.go:117] "RemoveContainer" containerID="1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.194386 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx"
Jan 30 17:54:32 crc kubenswrapper[4766]: E0130 17:54:32.530458 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15b6b4f_b273_4ad3_bd5b_c8c21421d672.slice/crio-48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.660441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"]
Jan 30 17:54:32 crc kubenswrapper[4766]: W0130 17:54:32.664223 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202a732a_6c9d_427a_9c87_af7c4af5d184.slice/crio-fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f WatchSource:0}: Error finding container fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f: Status 404 returned error can't find the container with id fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.829535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerStarted","Data":"587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.829597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerStarted","Data":"9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.850365 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.850347009 podStartE2EDuration="2.850347009s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.847757538 +0000 UTC m=+5527.485714884" watchObservedRunningTime="2026-01-30 17:54:32.850347009 +0000 UTC m=+5527.488304345"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"172f11d1f08481e85c172028438948e00677ee40db5df64052d44e88f3ee8c9f"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882357 4766 generic.go:334] "Generic (PLEG): container finished" podID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" exitCode=0
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerStarted","Data":"13c5060fcca39fb869c73e11390c606da85a656c1300d5ab6aa472270e9bf8ab"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"48c091b4127999cd92b0b2a6c8a5cc747b40f38f27c502854438f5732d970c5c"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.887514 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerStarted","Data":"fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.889157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerStarted","Data":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.889220 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerStarted","Data":"92dbd7b1b8a472aec7c8d9dd2722ad2e6ddf00a37ec9c45580a2afbf75ca87fa"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.890995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerStarted","Data":"a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af"}
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.907103 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.907081572 podStartE2EDuration="2.907081572s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.903875994 +0000 UTC m=+5527.541833350" watchObservedRunningTime="2026-01-30 17:54:32.907081572 +0000 UTC m=+5527.545038918"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.927484 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-247jx" podStartSLOduration=1.927463235 podStartE2EDuration="1.927463235s" podCreationTimestamp="2026-01-30 17:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.922624863 +0000 UTC m=+5527.560582209" watchObservedRunningTime="2026-01-30 17:54:32.927463235 +0000 UTC m=+5527.565420581"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.974274 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9742551170000002 podStartE2EDuration="2.974255117s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.971372069 +0000 UTC m=+5527.609329425" watchObservedRunningTime="2026-01-30 17:54:32.974255117 +0000 UTC m=+5527.612212453"
Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.992046 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.99202146 podStartE2EDuration="2.99202146s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.985647457 +0000 UTC m=+5527.623604803" watchObservedRunningTime="2026-01-30 17:54:32.99202146 +0000 UTC m=+5527.629978806"
Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.010563 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5xsrx" podStartSLOduration=3.010542413 podStartE2EDuration="3.010542413s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:33.00160093 +0000 UTC m=+5527.639558276" watchObservedRunningTime="2026-01-30 17:54:33.010542413 +0000 UTC m=+5527.648499769"
Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.905819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerStarted","Data":"5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496"}
Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.909697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerStarted","Data":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"}
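The pod_startup_latency_tracker entries above carry machine-readable SLO fields. A small sketch (same assumption: plain klog text on stdin; not kubelet source) that extracts podStartSLOduration per pod for a quick slowest-first report:

    #!/usr/bin/env python3
    # Collect podStartSLOduration values from pod_startup_latency_tracker
    # lines and print pods ordered by startup duration, slowest first.
    import re, sys

    PAT = re.compile(r'"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([\d.]+)')
    durations = {}
    for line in sys.stdin:
        if m := PAT.search(line):
            durations[m.group(1)] = float(m.group(2))
    for pod, d in sorted(durations.items(), key=lambda kv: -kv[1]):
        print(f"{d:8.3f}s  {pod}")

For the window above this would report roughly 3.9s for dnsmasq-dns (which runs an init step first) down to 1.9s for the conductor-db-sync job.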
event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerStarted","Data":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.946615 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" podStartSLOduration=3.9465930240000002 podStartE2EDuration="3.946593024s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:33.935717098 +0000 UTC m=+5528.573674474" watchObservedRunningTime="2026-01-30 17:54:33.946593024 +0000 UTC m=+5528.584550370" Jan 30 17:54:34 crc kubenswrapper[4766]: I0130 17:54:34.916375 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:35 crc kubenswrapper[4766]: I0130 17:54:35.926473 4766 generic.go:334] "Generic (PLEG): container finished" podID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerID="5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496" exitCode=0 Jan 30 17:54:35 crc kubenswrapper[4766]: I0130 17:54:35.926574 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerDied","Data":"5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496"} Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.090284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.227411 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.227467 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.389609 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.936198 4766 generic.go:334] "Generic (PLEG): container finished" podID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerID="a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af" exitCode=0 Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.936418 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerDied","Data":"a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af"} Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.290504 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388542 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.395404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts" (OuterVolumeSpecName: "scripts") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.407399 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd" (OuterVolumeSpecName: "kube-api-access-2hsmd") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "kube-api-access-2hsmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.419349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.430328 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data" (OuterVolumeSpecName: "config-data") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491115 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491216 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491249 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491258 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerDied","Data":"fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f"} Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945675 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945653 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.052971 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:54:38 crc kubenswrapper[4766]: E0130 17:54:38.053311 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.053327 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.053495 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.054198 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.055589 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.071660 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.202792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.203236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.203281 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.208925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.209004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.221762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.388469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.419310 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") "
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") "
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513874 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") "
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") "
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.517279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts" (OuterVolumeSpecName: "scripts") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.517687 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht" (OuterVolumeSpecName: "kube-api-access-9qrht") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "kube-api-access-9qrht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.538771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.589386 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data" (OuterVolumeSpecName: "config-data") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.616815 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617218 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617237 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617249 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.922829 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 17:54:38 crc kubenswrapper[4766]: W0130 17:54:38.925959 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ca03b3_7414_49ac_8fb1_7d2489d1c251.slice/crio-18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b WatchSource:0}: Error finding container 18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b: Status 404 returned error can't find the container with id 18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.957161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerStarted","Data":"18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b"}
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.958961 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerDied","Data":"8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08"}
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.958992 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08"
Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.959055 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx"
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.124916 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.125270 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" containerID="cri-o://e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" gracePeriod=30
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.125428 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" containerID="cri-o://41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" gracePeriod=30
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.152812 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.153461 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" containerID="cri-o://74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" gracePeriod=30
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193686 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" containerID="cri-o://40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" gracePeriod=30
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193988 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" containerID="cri-o://8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" gracePeriod=30
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.691349 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
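The DELETE handling above shows the two-phase stop: kubelet asks the runtime to kill each container with gracePeriod=30, i.e. SIGTERM first and SIGKILL only if the process outlives the grace window. A sketch of that pattern with a plain subprocess (illustrative only, not CRI or kubelet code):

    #!/usr/bin/env python3
    # Terminate a child the way the grace-period log entries describe.
    import signal, subprocess

    def kill_with_grace(proc: subprocess.Popen, grace: float = 30.0) -> int:
        proc.send_signal(signal.SIGTERM)      # polite request to shut down
        try:
            return proc.wait(timeout=grace)   # 0 if it exits cleanly in time
        except subprocess.TimeoutExpired:
            proc.kill()                       # SIGKILL once grace expires
            # Python reports death-by-signal as a negative returncode (-9
            # here); container runtimes surface it as 128 + signal number.
            return proc.wait()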
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.749307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs" (OuterVolumeSpecName: "logs") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.753782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t" (OuterVolumeSpecName: "kube-api-access-5tv7t") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "kube-api-access-5tv7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.756036 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.784238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.801570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data" (OuterVolumeSpecName: "config-data") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850171 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") "
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs" (OuterVolumeSpecName: "logs") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851814 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851845 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851863 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851878 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851891 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.853751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt" (OuterVolumeSpecName: "kube-api-access-nppkt") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "kube-api-access-nppkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.870926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.877826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data" (OuterVolumeSpecName: "config-data") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953147 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953188 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953201 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") on node \"crc\" DevicePath \"\""
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971609 4766 generic.go:334] "Generic (PLEG): container finished" podID="10a919f2-e41c-45e8-ba7f-882408152952" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" exitCode=0
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971663 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971672 4766 generic.go:334] "Generic (PLEG): container finished" podID="10a919f2-e41c-45e8-ba7f-882408152952" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" exitCode=143
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"48c091b4127999cd92b0b2a6c8a5cc747b40f38f27c502854438f5732d970c5c"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971783 4766 scope.go:117] "RemoveContainer" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.976656 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerStarted","Data":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.977883 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.979989 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" exitCode=0
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980014 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" exitCode=143
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"172f11d1f08481e85c172028438948e00677ee40db5df64052d44e88f3ee8c9f"}
Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980102 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
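Note the exit codes above: the nova-api-api and nova-metadata-metadata containers exit 0 (they handle SIGTERM and shut down cleanly), while the log-tailing containers exit 143, the 128 + signal-number convention for a process terminated by SIGTERM (signal 15). A decoding sketch:

    #!/usr/bin/env python3
    # Translate container exit codes using the 128 + N death-by-signal rule.
    import signal

    def describe(code: int) -> str:
        if code > 128:
            return f"killed by {signal.Signals(code - 128).name}"
        return "exited normally" if code == 0 else f"exited with status {code}"

    for c in (0, 143):
        print(c, "->", describe(c))   # 143 -> killed by SIGTERM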
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.000439 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.000393436 podStartE2EDuration="2.000393436s" podCreationTimestamp="2026-01-30 17:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:39.996690965 +0000 UTC m=+5534.634648311" watchObservedRunningTime="2026-01-30 17:54:40.000393436 +0000 UTC m=+5534.638350782" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.006487 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.028038 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.038500 4766 scope.go:117] "RemoveContainer" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.042666 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.042720 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} err="failed to get container status \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.042752 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.044532 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.044599 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} err="failed to get container status \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.044631 4766 scope.go:117] "RemoveContainer" 
containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.045105 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} err="failed to get container status \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.045140 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.046044 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} err="failed to get container status \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.046075 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.061901 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.061941 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.068985 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.078075 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.086334 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.086888 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.086963 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087047 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.087102 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087171 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.087253 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" 
containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087329 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.089509 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.089623 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.089714 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090207 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090314 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090474 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090558 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090656 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.093126 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.102220 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.105309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.120300 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.120677 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.121648 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.121829 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} err="failed to get container status \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.121969 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.126860 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.127487 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.130266 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} err="failed to get container status \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.130417 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.128896 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.132566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.135588 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} err="failed to get container status \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.138151 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.139202 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} err="failed to get container status \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.157573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.157991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158172 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158435 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.261371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.261772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 
17:54:40.262391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.264708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.265093 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.267464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.269367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.270114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.278967 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.282119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.283922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.428772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.452861 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.871459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: W0130 17:54:40.871921 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod450c67ff_a16a_43cf_8852_663c4c0073af.slice/crio-64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab WatchSource:0}: Error finding container 64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab: Status 404 returned error can't find the container with id 64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.994134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab"} Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.034133 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.389607 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.404923 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.486468 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.564771 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.565486 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" containerID="cri-o://5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" gracePeriod=10 Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.774715 4766 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.42:5353: connect: connection refused" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.018772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.019063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.047405 4766 generic.go:334] "Generic (PLEG): container finished" podID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerID="5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" exitCode=0 Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.071620 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a919f2-e41c-45e8-ba7f-882408152952" path="/var/lib/kubelet/pods/10a919f2-e41c-45e8-ba7f-882408152952/volumes" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.072551 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" path="/var/lib/kubelet/pods/7ff66025-4eb1-4da2-886f-e5ef9bf4831d/volumes" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073217 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.073201182 podStartE2EDuration="2.073201182s" podCreationTimestamp="2026-01-30 17:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:42.062661445 +0000 UTC m=+5536.700618811" watchObservedRunningTime="2026-01-30 17:54:42.073201182 +0000 UTC m=+5536.711158538" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073447 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"0b18c0a6248e0f08e59e0f76327c26fc51a0ee3b357d761ada211d388f46fe36"} 
Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.095993 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.095972521 podStartE2EDuration="2.095972521s" podCreationTimestamp="2026-01-30 17:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:42.084754116 +0000 UTC m=+5536.722711462" watchObservedRunningTime="2026-01-30 17:54:42.095972521 +0000 UTC m=+5536.733929867" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.148167 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250758 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250803 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.255346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4" (OuterVolumeSpecName: "kube-api-access-sscz4") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "kube-api-access-sscz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.294694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config" (OuterVolumeSpecName: "config") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.295437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.298144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.303115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353444 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353481 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353495 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353513 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353544 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.065107 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.066365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"62402daa4d1e00e414a6153806e7a4ebba06101c39ecd01fd579e17d1df427fb"} Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.066479 4766 scope.go:117] "RemoveContainer" containerID="5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.092555 4766 scope.go:117] "RemoveContainer" containerID="97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.119042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.132379 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.925101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.978905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.979335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.979621 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.984496 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58" (OuterVolumeSpecName: "kube-api-access-gvx58") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "kube-api-access-gvx58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.005320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data" (OuterVolumeSpecName: "config-data") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.013539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.051748 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" path="/var/lib/kubelet/pods/df37c2c0-49c6-46b4-a4c9-085cad77c471/volumes" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077073 4766 generic.go:334] "Generic (PLEG): container finished" podID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" exitCode=0 Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerDied","Data":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"} Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077124 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerDied","Data":"92dbd7b1b8a472aec7c8d9dd2722ad2e6ddf00a37ec9c45580a2afbf75ca87fa"} Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077215 4766 scope.go:117] "RemoveContainer" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081653 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081674 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081684 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.107679 4766 scope.go:117] "RemoveContainer" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.108167 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": container with ID starting with 74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e not found: ID does not exist" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.108221 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"} err="failed to get container status \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": rpc error: code = NotFound desc = could not find container \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": container with ID starting with 74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e not found: ID does not exist" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.108544 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.126550 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.136616 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137110 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137138 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137205 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="init" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137216 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="init" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137241 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137251 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137471 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137514 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.138447 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.145420 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.146184 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.286952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.287051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.287449 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388763 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388837 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.392998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.394632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.405281 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbf2\" (UniqueName: 
\"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.457317 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.872598 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.089799 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerStarted","Data":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.089892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerStarted","Data":"dbf668e645f6a44821ff790f6478d0f15ef68055392d35449de7aa0dcc2f94d1"} Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.115446 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.115420334 podStartE2EDuration="1.115420334s" podCreationTimestamp="2026-01-30 17:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:45.109830853 +0000 UTC m=+5539.747788199" watchObservedRunningTime="2026-01-30 17:54:45.115420334 +0000 UTC m=+5539.753377670" Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.454010 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.454157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:46 crc kubenswrapper[4766]: I0130 17:54:46.053930 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" path="/var/lib/kubelet/pods/960be176-b983-4be1-90cc-05fdc39fb4e3/volumes" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.418865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.949274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.951497 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.954512 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.954749 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.975399 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.078805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.078912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.079132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.079243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181146 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181166 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.188856 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.198834 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.200247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.202911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.277334 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.457477 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.721900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:49 crc kubenswrapper[4766]: W0130 17:54:49.734487 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018ff185_8917_437b_9c5a_ec143d1fc84a.slice/crio-80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d WatchSource:0}: Error finding container 80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d: Status 404 returned error can't find the container with id 80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.137205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerStarted","Data":"1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0"} Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.137739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerStarted","Data":"80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d"} Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.159137 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nfnj2" podStartSLOduration=2.159112844 podStartE2EDuration="2.159112844s" podCreationTimestamp="2026-01-30 17:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:50.154119758 +0000 UTC m=+5544.792077104" watchObservedRunningTime="2026-01-30 17:54:50.159112844 +0000 UTC m=+5544.797070190" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.429796 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.429857 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.453207 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.453685 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.554458 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.554691 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 
Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.555008 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.555039 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:54:54 crc kubenswrapper[4766]: I0130 17:54:54.457735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 17:54:54 crc kubenswrapper[4766]: I0130 17:54:54.482738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.177268 4766 generic.go:334] "Generic (PLEG): container finished" podID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerID="1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0" exitCode=0
Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.177341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerDied","Data":"1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0"}
Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.206611 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.501285 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2"
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538226 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538371 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.539075 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.544360 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg" (OuterVolumeSpecName: "kube-api-access-b5xlg") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "kube-api-access-b5xlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.545029 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts" (OuterVolumeSpecName: "scripts") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.571112 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data" (OuterVolumeSpecName: "config-data") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.590364 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641347 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641384 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641400 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641413 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerDied","Data":"80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d"} Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200929 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200651 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367124 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367712 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" containerID="cri-o://96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367968 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" containerID="cri-o://4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.388142 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.388598 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" containerID="cri-o://762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443061 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443361 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" 
containerName="nova-metadata-log" containerID="cri-o://24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443981 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" containerID="cri-o://8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" gracePeriod=30 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.209543 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" exitCode=143 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.209593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.211406 4766 generic.go:334] "Generic (PLEG): container finished" podID="450c67ff-a16a-43cf-8852-663c4c0073af" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" exitCode=143 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.211428 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.459236 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.460984 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.463869 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.463920 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.137902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.149760 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232829 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.234607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs" (OuterVolumeSpecName: "logs") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.235042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs" (OuterVolumeSpecName: "logs") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.241861 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9" (OuterVolumeSpecName: "kube-api-access-987z9") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "kube-api-access-987z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242033 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" exitCode=0 Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"0b18c0a6248e0f08e59e0f76327c26fc51a0ee3b357d761ada211d388f46fe36"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242276 4766 scope.go:117] "RemoveContainer" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.244241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r" (OuterVolumeSpecName: "kube-api-access-fhf5r") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "kube-api-access-fhf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.244784 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245479 4766 generic.go:334] "Generic (PLEG): container finished" podID="450c67ff-a16a-43cf-8852-663c4c0073af" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" exitCode=0 Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245689 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.263636 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data" (OuterVolumeSpecName: "config-data") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.263836 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.269375 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data" (OuterVolumeSpecName: "config-data") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.277784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.308059 4766 scope.go:117] "RemoveContainer" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.323843 4766 scope.go:117] "RemoveContainer" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.324409 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": container with ID starting with 8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891 not found: ID does not exist" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324476 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} err="failed to get container status \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": rpc error: code = NotFound desc = could not find container \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": container with ID starting with 8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891 not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324507 4766 scope.go:117] "RemoveContainer" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.324941 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": container with ID starting with 24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912 not found: ID does not exist" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324994 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} err="failed to get container status \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": rpc error: code = NotFound desc = could not find container \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": container with ID starting with 24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912 not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.325029 4766 scope.go:117] "RemoveContainer" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335728 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335762 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335772 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335782 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335790 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335798 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335806 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335814 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.342376 4766 scope.go:117] "RemoveContainer" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.359714 4766 scope.go:117] "RemoveContainer" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.360113 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": container with ID starting with 4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f not found: ID does not exist" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.360152 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} err="failed to get container status \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": rpc error: code = NotFound desc = could not find container \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": container with ID starting with 4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.360317 4766 scope.go:117] "RemoveContainer" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.360681 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": container with ID starting with 96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc not found: ID does not exist" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc 
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.360732 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} err="failed to get container status \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": rpc error: code = NotFound desc = could not find container \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": container with ID starting with 96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc not found: ID does not exist"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.599797 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.614359 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.676288 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.677747 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.677774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log"
Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.677811 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.677967 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata"
Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678010 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678019 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage"
Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678033 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678039 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log"
Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678056 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678065 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688590 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688666 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage"
Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688684 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api"
podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688711 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.694148 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.697435 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.708093 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.728139 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743382 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.744009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.744038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.752223 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.754894 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.757674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.762474 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.800047 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845446 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845939 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846165 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846370 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.847721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.851402 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2" (OuterVolumeSpecName: "kube-api-access-8mbf2") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "kube-api-access-8mbf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.852949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.854782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.867573 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.874709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data" (OuterVolumeSpecName: "config-data") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.882885 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.947902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.947985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948108 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948153 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948165 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948191 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.950034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.952620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.956495 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.965808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod 
\"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.053693 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" path="/var/lib/kubelet/pods/450c67ff-a16a-43cf-8852-663c4c0073af/volumes" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.054531 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" path="/var/lib/kubelet/pods/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5/volumes" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.097736 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.128548 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264408 4766 generic.go:334] "Generic (PLEG): container finished" podID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" exitCode=0 Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerDied","Data":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264502 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerDied","Data":"dbf668e645f6a44821ff790f6478d0f15ef68055392d35449de7aa0dcc2f94d1"} Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264521 4766 scope.go:117] "RemoveContainer" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264535 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.289303 4766 scope.go:117] "RemoveContainer" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: E0130 17:55:02.289896 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": container with ID starting with 762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927 not found: ID does not exist" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.289937 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} err="failed to get container status \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": rpc error: code = NotFound desc = could not find container \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": container with ID starting with 762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927 not found: ID does not exist" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.291351 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.305247 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320159 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: E0130 17:55:02.320646 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320662 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.321609 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.324385 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.331127 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357915 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.459916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.460027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.460053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.465216 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.465635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.475221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrcnb\" (UniqueName: 
\"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.566232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: W0130 17:55:02.570987 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0670fd5_b8de_408e_9cfa_b594e8e3aa84.slice/crio-79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04 WatchSource:0}: Error finding container 79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04: Status 404 returned error can't find the container with id 79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04 Jan 30 17:55:02 crc kubenswrapper[4766]: W0130 17:55:02.639928 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82bd49a0_efdc_46f1_95b8_a706be68208d.slice/crio-66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202 WatchSource:0}: Error finding container 66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202: Status 404 returned error can't find the container with id 66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202 Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.641379 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.641590 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.096596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:03 crc kubenswrapper[4766]: W0130 17:55:03.099066 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf204102e_c8ed_4d40_b8c3_87c1921f66fb.slice/crio-6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11 WatchSource:0}: Error finding container 6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11: Status 404 returned error can't find the container with id 6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11 Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278507 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284572 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.287574 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerStarted","Data":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.287614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerStarted","Data":"6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.302554 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302535711 podStartE2EDuration="2.302535711s" podCreationTimestamp="2026-01-30 17:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.296009343 +0000 UTC m=+5557.933966699" watchObservedRunningTime="2026-01-30 17:55:03.302535711 +0000 UTC m=+5557.940493047" Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.323071 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.323053438 podStartE2EDuration="2.323053438s" podCreationTimestamp="2026-01-30 17:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.314988719 +0000 UTC m=+5557.952946065" watchObservedRunningTime="2026-01-30 17:55:03.323053438 +0000 UTC m=+5557.961010784" Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.338828 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.338807477 podStartE2EDuration="1.338807477s" podCreationTimestamp="2026-01-30 17:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.330814419 +0000 UTC m=+5557.968771755" watchObservedRunningTime="2026-01-30 17:55:03.338807477 +0000 UTC m=+5557.976764823" Jan 30 17:55:04 crc kubenswrapper[4766]: I0130 17:55:04.049198 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" path="/var/lib/kubelet/pods/682ac4fd-3610-40e1-8c35-8396cf9f5342/volumes" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.098826 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.099976 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.642315 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.098982 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.099876 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.129323 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.129382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.642226 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.674284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.099739 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222375 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222705 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222893 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.397611 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.100608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.101244 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.104568 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.104722 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc 
kubenswrapper[4766]: I0130 17:55:22.137136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.138603 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.138948 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.148361 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.430199 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.437543 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.614721 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.617229 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.653844 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.734848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.734999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735048 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837389 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837596 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.839214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.859357 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7855\" (UniqueName: 
\"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.959938 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:23 crc kubenswrapper[4766]: I0130 17:55:23.521773 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451210 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerID="90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6" exitCode=0 Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6"} Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerStarted","Data":"eab82cb398525f14ced0104b7ca1271c77f56fe1657116a66a65ddcab59d73d5"} Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.462840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerStarted","Data":"8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6"} Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.463149 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.496638 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podStartSLOduration=3.49662073 podStartE2EDuration="3.49662073s" podCreationTimestamp="2026-01-30 17:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:25.493617798 +0000 UTC m=+5580.131575164" watchObservedRunningTime="2026-01-30 17:55:25.49662073 +0000 UTC m=+5580.134578076" Jan 30 17:55:32 crc kubenswrapper[4766]: I0130 17:55:32.204391 4766 scope.go:117] "RemoveContainer" containerID="39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716" Jan 30 17:55:32 crc kubenswrapper[4766]: I0130 17:55:32.962535 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.027943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.028211 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" containerID="cri-o://33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" gracePeriod=10 Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.508639 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533497 4766 generic.go:334] "Generic (PLEG): container finished" podID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" exitCode=0 Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533589 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"13c5060fcca39fb869c73e11390c606da85a656c1300d5ab6aa472270e9bf8ab"} Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533617 4766 scope.go:117] "RemoveContainer" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533822 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.559864 4766 scope.go:117] "RemoveContainer" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.580676 4766 scope.go:117] "RemoveContainer" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.581076 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": container with ID starting with 33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743 not found: ID does not exist" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581109 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} err="failed to get container status \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": rpc error: code = NotFound desc = could not find container \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": container with ID starting with 33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743 not found: ID does not exist" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581134 4766 scope.go:117] "RemoveContainer" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.581523 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": container with ID starting with 48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639 not found: ID does not exist" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581548 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639"} err="failed to get container status \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": rpc error: code = NotFound desc = could not find container \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": container with ID starting with 48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639 not found: ID does not exist" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.653900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.653945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654099 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.664397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5" (OuterVolumeSpecName: "kube-api-access-5xqt5") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "kube-api-access-5xqt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.704259 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.705771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.709078 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.726468 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config" (OuterVolumeSpecName: "config") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756359 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756391 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756404 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756414 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756423 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.871905 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.881299 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.987737 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15b6b4f_b273_4ad3_bd5b_c8c21421d672.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.050145 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" path="/var/lib/kubelet/pods/c15b6b4f-b273-4ad3-bd5b-c8c21421d672/volumes" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970255 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:34 crc kubenswrapper[4766]: E0130 17:55:34.970637 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="init" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 
17:55:34.970648 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="init" Jan 30 17:55:34 crc kubenswrapper[4766]: E0130 17:55:34.970663 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970669 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.971478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.984881 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.997877 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.999487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.002196 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.023640 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.181313 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.183670 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.203635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.208044 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.288890 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.316670 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.743309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.817804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.576994 4766 generic.go:334] "Generic (PLEG): container finished" podID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerID="9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d" exitCode=0 Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.577073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerDied","Data":"9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.577107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerStarted","Data":"07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580577 4766 generic.go:334] "Generic (PLEG): container finished" podID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerID="d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963" exitCode=0 Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerDied","Data":"d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerStarted","Data":"c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.008225 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.014394 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"912d4cef-a7f3-40a4-b498-f1da7361a15c\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"912d4cef-a7f3-40a4-b498-f1da7361a15c\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142219 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"3f3c8440-d3be-418a-a446-f3f592a864bd\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"3f3c8440-d3be-418a-a446-f3f592a864bd\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.143085 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "912d4cef-a7f3-40a4-b498-f1da7361a15c" (UID: "912d4cef-a7f3-40a4-b498-f1da7361a15c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.143107 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f3c8440-d3be-418a-a446-f3f592a864bd" (UID: "3f3c8440-d3be-418a-a446-f3f592a864bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.149946 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l" (OuterVolumeSpecName: "kube-api-access-ssg6l") pod "3f3c8440-d3be-418a-a446-f3f592a864bd" (UID: "3f3c8440-d3be-418a-a446-f3f592a864bd"). InnerVolumeSpecName "kube-api-access-ssg6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.150757 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh" (OuterVolumeSpecName: "kube-api-access-p5hdh") pod "912d4cef-a7f3-40a4-b498-f1da7361a15c" (UID: "912d4cef-a7f3-40a4-b498-f1da7361a15c"). InnerVolumeSpecName "kube-api-access-p5hdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244395 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244456 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244468 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244478 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598153 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerDied","Data":"c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598204 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598529 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerDied","Data":"07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599757 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599779 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.326412 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:40 crc kubenswrapper[4766]: E0130 17:55:40.327009 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327021 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: E0130 17:55:40.327052 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327059 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327254 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327281 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327853 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.330447 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.336228 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.336385 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zh4ls" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.347461 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 
17:55:40.382901 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.383016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484273 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484409 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489332 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.491103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.507509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.660089 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:41 crc kubenswrapper[4766]: I0130 17:55:41.100070 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:41 crc kubenswrapper[4766]: W0130 17:55:41.102978 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d92cbfe_71f2_4dc5_981b_0c52c1169a2d.slice/crio-011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38 WatchSource:0}: Error finding container 011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38: Status 404 returned error can't find the container with id 011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38 Jan 30 17:55:41 crc kubenswrapper[4766]: I0130 17:55:41.626666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerStarted","Data":"011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38"} Jan 30 17:55:42 crc kubenswrapper[4766]: I0130 17:55:42.635106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerStarted","Data":"7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba"} Jan 30 17:55:42 crc kubenswrapper[4766]: I0130 17:55:42.660557 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7fd4h" podStartSLOduration=2.660539588 podStartE2EDuration="2.660539588s" podCreationTimestamp="2026-01-30 17:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:42.653295411 +0000 UTC m=+5597.291252767" watchObservedRunningTime="2026-01-30 17:55:42.660539588 +0000 UTC m=+5597.298496934" Jan 30 17:55:44 crc kubenswrapper[4766]: E0130 17:55:44.224730 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d92cbfe_71f2_4dc5_981b_0c52c1169a2d.slice/crio-conmon-7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:55:44 crc kubenswrapper[4766]: I0130 17:55:44.669582 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerID="7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba" exitCode=0 Jan 30 17:55:44 crc kubenswrapper[4766]: I0130 17:55:44.669669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerDied","Data":"7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba"} Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.037305 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096227 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096774 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg" (OuterVolumeSpecName: "kube-api-access-ptlwg") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "kube-api-access-ptlwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts" (OuterVolumeSpecName: "scripts") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.122691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.141474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data" (OuterVolumeSpecName: "config-data") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198745 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198776 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198786 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198798 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198806 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691436 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerDied","Data":"011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38"} Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691486 4766 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691563 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.065603 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:47 crc kubenswrapper[4766]: E0130 17:55:47.066103 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.066119 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.066366 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.067616 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.090452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.116882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.116962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117093 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.202704 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 
30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.204240 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.207292 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.208014 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zh4ls" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.208033 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.211178 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218523 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.219647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.219671 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.220294 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.220358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.226036 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.265142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.320898 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.320976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321280 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321365 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.388731 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422929 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422955 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423105 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423769 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: 
\"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.427092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.427896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.430789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.438146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.448827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.526631 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.941181 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.232253 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:55:48 crc kubenswrapper[4766]: W0130 17:55:48.268226 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae29236e_6325_4cee_99e8_45b5dbfdae9d.slice/crio-603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5 WatchSource:0}: Error finding container 603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5: Status 404 returned error can't find the container with id 603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5 Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724390 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2333655-ed62-419c-a0cc-04a4c9f36938" containerID="5ca75ddc325a95514c6e15fdb6e4fc3a54b7c81eb0c7e459dacf544d3c7f63c0" exitCode=0 Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerDied","Data":"5ca75ddc325a95514c6e15fdb6e4fc3a54b7c81eb0c7e459dacf544d3c7f63c0"} Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724545 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerStarted","Data":"fa1bd6f41a82e121deea0f18d4981f1f2d28b4f7c6dc486fddee74ee05ad0cb8"} Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.729894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.740753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerStarted","Data":"ed8c661ef47eb4ff1b1df085e3ffe9a1985ea919620d4430d8986d970f83d80c"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.741705 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745285 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745434 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.765338 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" podStartSLOduration=2.765322754 
podStartE2EDuration="2.765322754s" podCreationTimestamp="2026-01-30 17:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:49.763217947 +0000 UTC m=+5604.401175303" watchObservedRunningTime="2026-01-30 17:55:49.765322754 +0000 UTC m=+5604.403280100" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.782057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.7820399780000002 podStartE2EDuration="2.782039978s" podCreationTimestamp="2026-01-30 17:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:49.77768644 +0000 UTC m=+5604.415643786" watchObservedRunningTime="2026-01-30 17:55:49.782039978 +0000 UTC m=+5604.419997324" Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.391413 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.461035 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.461465 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" containerID="cri-o://8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" gracePeriod=10 Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.877170 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerID="8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" exitCode=0 Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.877253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6"} Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.077714 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.150790 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.150926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151021 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151201 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.178095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855" (OuterVolumeSpecName: "kube-api-access-d7855") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "kube-api-access-d7855". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.226727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.242843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config" (OuterVolumeSpecName: "config") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.248804 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.253825 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254092 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254220 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254306 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.255550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.356555 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"eab82cb398525f14ced0104b7ca1271c77f56fe1657116a66a65ddcab59d73d5"} Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889672 4766 scope.go:117] "RemoveContainer" containerID="8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889611 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.934219 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.934517 4766 scope.go:117] "RemoveContainer" containerID="90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.943199 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.318904 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.319255 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" containerID="cri-o://200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.319301 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" containerID="cri-o://34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.333082 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.333296 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" containerID="cri-o://86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.341449 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.341674 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.365639 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.365885 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.386527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.386793 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" containerID="cri-o://a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 
17:55:59.387241 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" containerID="cri-o://f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.753131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.904469 4766 generic.go:334] "Generic (PLEG): container finished" podID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerID="587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" exitCode=0 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.904548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerDied","Data":"587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308"} Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.914306 4766 generic.go:334] "Generic (PLEG): container finished" podID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerID="a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" exitCode=143 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.914389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39"} Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.926078 4766 generic.go:334] "Generic (PLEG): container finished" podID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerID="200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" exitCode=143 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.926124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d"} Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.054725 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" path="/var/lib/kubelet/pods/c5061d92-9c4a-4434-a5ff-32dcdd752ee7/volumes" Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.141260 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.144493 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.146034 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.146115 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.319279 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397352 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.405127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v" (OuterVolumeSpecName: "kube-api-access-7lv8v") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "kube-api-access-7lv8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.447145 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.449311 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data" (OuterVolumeSpecName: "config-data") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502524 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502560 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502574 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerDied","Data":"9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b"} Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938513 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938774 4766 scope.go:117] "RemoveContainer" containerID="587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.010254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.074660 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.074735 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075089 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075101 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075110 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="init" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075115 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="init" Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075131 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075136 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075613 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075632 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.076306 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.081135 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.082014 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114539 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.216622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.217399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.217426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.220704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.223911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.233896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.399868 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.596215 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623716 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.642565 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb" (OuterVolumeSpecName: "kube-api-access-nrcnb") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "kube-api-access-nrcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.664336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data" (OuterVolumeSpecName: "config-data") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.685974 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726338 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726376 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726390 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954554 4766 generic.go:334] "Generic (PLEG): container finished" podID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" exitCode=0 Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954600 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerDied","Data":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954615 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerDied","Data":"6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11"} Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954686 4766 scope.go:117] "RemoveContainer" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.001382 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.007333 4766 scope.go:117] "RemoveContainer" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.021708 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: E0130 17:56:02.022498 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": container with ID starting with 86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd not found: ID does not exist" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.022554 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} err="failed to get container status \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": rpc error: code = NotFound desc = could not find container \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": container with ID starting with 
86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd not found: ID does not exist" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.031229 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.054259 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" path="/var/lib/kubelet/pods/1f688a02-a337-43d9-9cc8-ca5d7ba19898/volumes" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.055404 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" path="/var/lib/kubelet/pods/f204102e-c8ed-4d40-b8c3-87c1921f66fb/volumes" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: E0130 17:56:02.056676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056731 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056940 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.057655 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.057735 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.061087 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137623 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.244257 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.245568 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.276457 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " 
pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.379857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.539267 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": read tcp 10.217.0.2:58116->10.217.1.66:8774: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.540360 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": read tcp 10.217.0.2:58118->10.217.1.66:8774: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.585368 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.585587 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" gracePeriod=30 Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.726012 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": read tcp 10.217.0.2:50018->10.217.1.65:8775: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.726026 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": read tcp 10.217.0.2:50004->10.217.1.65:8775: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.862873 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: W0130 17:56:02.873390 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod782b2122_c6f0_424d_85b1_efb911f37e20.slice/crio-cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a WatchSource:0}: Error finding container cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a: Status 404 returned error can't find the container with id cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.965452 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.68:5353: i/o timeout" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.972833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"782b2122-c6f0-424d-85b1-efb911f37e20","Type":"ContainerStarted","Data":"cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a"} Jan 30 17:56:02 crc 
kubenswrapper[4766]: I0130 17:56:02.982736 4766 generic.go:334] "Generic (PLEG): container finished" podID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerID="34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" exitCode=0 Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.982820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.010204 4766 generic.go:334] "Generic (PLEG): container finished" podID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerID="f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" exitCode=0 Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.010332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.013661 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d4aa9c5-4f42-495a-921f-986b170dafe4","Type":"ContainerStarted","Data":"25018228b02ea207d81542655aa9b32ef3784522ec69ac31eb4ff676b85b705b"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.013718 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d4aa9c5-4f42-495a-921f-986b170dafe4","Type":"ContainerStarted","Data":"c7806c8fea73e7b8121b46870f9955dd9e4a2c8319903d9c09f3b36c64d06acc"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.018441 4766 util.go:48] "No ready sandbox for pod can be found. 
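
Both nova-api-0 readiness probes (port 8774) and both nova-metadata-0 probes (port 8775) fail above with "connection reset by peer" because the containers are already being torn down; the PLEG "ContainerDied" events that follow confirm it. The probe output shows the check is an HTTP GET against the pod IP; a sketch of an equivalent spec in client-go types, with the timing fields assumed since they are not visible in this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Readiness probe equivalent to the failing checks above:
        // GET http://<podIP>:8774/ (nova-api) must succeed for the
        // container to stay Ready.
        readiness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",
                    Port: intstr.FromInt(8774),
                },
            },
            PeriodSeconds:    10, // assumed, not visible in the log
            FailureThreshold: 3,  // assumed, not visible in the log
        }
        fmt.Printf("%+v\n", readiness)
    }
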
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.046122 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.045907379 podStartE2EDuration="3.045907379s" podCreationTimestamp="2026-01-30 17:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:03.033241705 +0000 UTC m=+5617.671199051" watchObservedRunningTime="2026-01-30 17:56:03.045907379 +0000 UTC m=+5617.683864725" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.059881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.059991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.060044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.060068 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.061791 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs" (OuterVolumeSpecName: "logs") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.084430 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj" (OuterVolumeSpecName: "kube-api-access-h8tgj") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "kube-api-access-h8tgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.128123 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.131080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data" (OuterVolumeSpecName: "config-data") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167672 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167713 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167726 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167737 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.203434 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.268603 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.268906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269221 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs" (OuterVolumeSpecName: "logs") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.271396 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.275369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns" (OuterVolumeSpecName: "kube-api-access-92bns") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "kube-api-access-92bns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.335157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data" (OuterVolumeSpecName: "config-data") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.338498 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374094 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374125 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374136 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.401557 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.412073 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.417967 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.418276 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.028191 4766 generic.go:334] "Generic (PLEG): container finished" podID="c6725384-f878-416e-832e-64ea63dc6698" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" exitCode=0 Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.028369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerDied","Data":"c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.031458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.031538 4766 scope.go:117] "RemoveContainer" containerID="f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.032194 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.049292 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.054745 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"782b2122-c6f0-424d-85b1-efb911f37e20","Type":"ContainerStarted","Data":"3f1f30daaa1e0931fb7ea855dc99e864ca970d09d174b3686e8c7026c65b948f"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.054936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.077696 4766 scope.go:117] "RemoveContainer" containerID="a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.093517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.09347907 podStartE2EDuration="3.09347907s" podCreationTimestamp="2026-01-30 17:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:04.070622789 +0000 UTC m=+5618.708580135" watchObservedRunningTime="2026-01-30 17:56:04.09347907 +0000 UTC m=+5618.731436416" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.159292 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.176162 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.185555 4766 scope.go:117] "RemoveContainer" containerID="34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.190653 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191145 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191170 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191211 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191219 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191234 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191243 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191256 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191264 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191471 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191484 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191494 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191505 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.192886 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.207264 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.211547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.221217 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.243230 4766 scope.go:117] "RemoveContainer" containerID="200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.255288 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.268919 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.270665 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.276531 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.286475 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299357 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.406151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.413266 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.413934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.428861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.492788 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506814 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506941 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.508824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.511574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.512380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: 
\"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.512920 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2" (OuterVolumeSpecName: "kube-api-access-6kwb2") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "kube-api-access-6kwb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.539986 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.542539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.545490 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.588378 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data" (OuterVolumeSpecName: "config-data") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.598378 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608750 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608819 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608829 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.034826 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.056726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"582f8309a7ee5444494d0cee309368d965f0b1605401c50bae8f9becb98ea8cf"} Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerDied","Data":"04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2"} Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058146 4766 scope.go:117] "RemoveContainer" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058250 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.099514 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.115384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.125583 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: E0130 17:56:05.126104 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.126122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.126632 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.127444 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.133010 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.135987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.180743 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321198 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.430862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.430877 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.444621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.470846 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.952490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: W0130 17:56:05.958623 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod463fa20b_ef02_4b0a_ae8e_3fed6dc02c37.slice/crio-dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b WatchSource:0}: Error finding container dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b: Status 404 returned error can't find the container with id dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.065840 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" path="/var/lib/kubelet/pods/82bd49a0-efdc-46f1-95b8-a706be68208d/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.066463 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6725384-f878-416e-832e-64ea63dc6698" path="/var/lib/kubelet/pods/c6725384-f878-416e-832e-64ea63dc6698/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.066985 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" path="/var/lib/kubelet/pods/d0670fd5-b8de-408e-9cfa-b594e8e3aa84/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.078614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37","Type":"ContainerStarted","Data":"dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081702 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"b255fb203861cf38aece6b9f19759ab6362bcc74d07753d836d503a0f0531810"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"ea2df8a4ca0b63725bb799c30ae8cd374fcac1fed842b7154565bcd302c5ab2b"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081770 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"39dd9d346cb8093b57a4b81986998af18fb6480cbee4c8d238152e5b9603eba8"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.084248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"dbd2f7fdf9f744ecda2014b7d25bc692be44ca5a2049413c28d896553b81626d"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.084288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"f7d9b9f0781c098e45f79bc0549e0721fc8c2f87f4c435181fa68f3e690a10fb"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.140327 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.14030674 podStartE2EDuration="2.14030674s" podCreationTimestamp="2026-01-30 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:06.132528189 +0000 UTC m=+5620.770485545" watchObservedRunningTime="2026-01-30 17:56:06.14030674 +0000 UTC m=+5620.778264086" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.158907 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.158890305 podStartE2EDuration="2.158890305s" podCreationTimestamp="2026-01-30 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:06.157034765 +0000 UTC m=+5620.794992111" watchObservedRunningTime="2026-01-30 17:56:06.158890305 +0000 UTC m=+5620.796847651" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.401128 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.095603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37","Type":"ContainerStarted","Data":"148fa7325166aabadf12d512e159985b0672ebc805e033bb178eeafce376f3b6"} Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.119600 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.119580455 podStartE2EDuration="2.119580455s" podCreationTimestamp="2026-01-30 17:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:07.109019068 +0000 UTC m=+5621.746976414" watchObservedRunningTime="2026-01-30 17:56:07.119580455 +0000 UTC m=+5621.757537801" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.393217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.694563 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871075 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871301 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.876644 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq" (OuterVolumeSpecName: "kube-api-access-qz5gq") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "kube-api-access-qz5gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.896220 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.899577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data" (OuterVolumeSpecName: "config-data") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973196 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973244 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973255 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.104411 4766 generic.go:334] "Generic (PLEG): container finished" podID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" exitCode=0 Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105191 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerDied","Data":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"} Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerDied","Data":"18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b"} Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.106035 4766 scope.go:117] "RemoveContainer" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.132057 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.147634 4766 scope.go:117] "RemoveContainer" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: E0130 17:56:08.149473 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": container with ID starting with 4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5 not found: ID does not exist" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.149516 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"} err="failed to get container status \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": rpc error: code = NotFound desc = could not find container \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": container with ID 
starting with 4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5 not found: ID does not exist" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.156980 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.169384 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: E0130 17:56:08.169790 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.169806 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.170016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.170640 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.173604 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.185483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278147 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.381899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.382083 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc 
kubenswrapper[4766]: I0130 17:56:08.382266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.388849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.401712 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.402131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.493498 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.933321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: W0130 17:56:08.936991 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4061a48_dd7c_4b2f_aa8b_422eb8f65c1e.slice/crio-01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7 WatchSource:0}: Error finding container 01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7: Status 404 returned error can't find the container with id 01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7 Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.045802 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.046160 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.114381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e","Type":"ContainerStarted","Data":"01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7"} Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.599153 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.599221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.049616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" path="/var/lib/kubelet/pods/42ca03b3-7414-49ac-8fb1-7d2489d1c251/volumes" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.127015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e","Type":"ContainerStarted","Data":"f0ce49675782d93734e1bb3bec7969fd7a37f6cf9d4f90c57370abf0dc245664"} Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.127862 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.147074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.147057327 podStartE2EDuration="2.147057327s" podCreationTimestamp="2026-01-30 17:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:10.146287366 +0000 UTC m=+5624.784244732" watchObservedRunningTime="2026-01-30 17:56:10.147057327 +0000 UTC m=+5624.785014673" Jan 30 17:56:11 crc kubenswrapper[4766]: I0130 17:56:11.401565 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:11 crc kubenswrapper[4766]: I0130 17:56:11.412077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.155641 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.380077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.404696 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:56:13 crc kubenswrapper[4766]: I0130 17:56:13.176527 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.540873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.541237 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.599873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.599923 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.500762 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.623443 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="af618003-f485-4daa-bedb-d1408b4547bb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.623443 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af618003-f485-4daa-bedb-d1408b4547bb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.705798 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="374fa21e-428d-4383-9124-5272df0552d4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.706266 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="374fa21e-428d-4383-9124-5272df0552d4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.116254 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.118061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.123876 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.135633 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162402 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162625 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162731 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc 
kubenswrapper[4766]: I0130 17:56:17.162865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.265539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.270151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.270407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.272503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.273168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.284340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.445749 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.914639 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.189311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"bd8c344070ab05d750472242fd65de8a04e107ddf96ae138a125614afab2f3d2"} Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.524550 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.745332 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.745943 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" containerID="cri-o://7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" gracePeriod=30 Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.746443 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" containerID="cri-o://c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" gracePeriod=30 Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.198613 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerID="7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" exitCode=143 Jan 30 
17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.198702 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.200405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.200450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.230663 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.230640415 podStartE2EDuration="2.230640415s" podCreationTimestamp="2026-01-30 17:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:19.219831771 +0000 UTC m=+5633.857789117" watchObservedRunningTime="2026-01-30 17:56:19.230640415 +0000 UTC m=+5633.868597781" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.374436 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.376910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.384317 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.415459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508277 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508380 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508465 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508547 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508597 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508695 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508814 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610733 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" 
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610822 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610839 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610856 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610933 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611160 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.613579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.613865 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.617389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.617459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.619933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.620359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.620560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.629798 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.719087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.978594 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.980760 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.983623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.991902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.087086 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.115376 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122748 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123115 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123406 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123462 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123539 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.209732 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"146b4e6fed4059a7b0f7215f8e3b9261dd1e347a79471ba7bda8a46a110ffac7"}
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225428 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225448 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225466 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225542 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226132 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226233 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226578 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.234000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.235078 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.235519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.236956 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.237918 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.250873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.312638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.867308 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 17:56:20 crc kubenswrapper[4766]: W0130 17:56:20.877956 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4ab9dd_be94_4701_a0ba_55dde27e9543.slice/crio-96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98 WatchSource:0}: Error finding container 96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98: Status 404 returned error can't find the container with id 96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98
Jan 30 17:56:21 crc kubenswrapper[4766]: I0130 17:56:21.219969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.235618 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"42bfdb3153947fd27f29e304c5840adc4afcdad64331539bae96d661874821c3"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.236135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"37e3f83e319f5434e415c94d5864b501a939dc7bff638335a31077f6430d92fe"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.238033 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerID="c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" exitCode=0
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.238094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.242238 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"ea4f8155551129bdd1136314348feddd5496f037a509057432b02004d45ec400"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.242304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"f287689326ec338c74496105b1c0ac7e73fb399b0d9513755b69ccd5bc6ccda5"}
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.295562 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.149010573 podStartE2EDuration="3.295543194s" podCreationTimestamp="2026-01-30 17:56:19 +0000 UTC" firstStartedPulling="2026-01-30 17:56:20.115076782 +0000 UTC m=+5634.753034128" lastFinishedPulling="2026-01-30 17:56:21.261609403 +0000 UTC m=+5635.899566749" observedRunningTime="2026-01-30 17:56:22.286072426 +0000 UTC m=+5636.924029792" watchObservedRunningTime="2026-01-30 17:56:22.295543194 +0000 UTC m=+5636.933500540"
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.314268 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.480187822 podStartE2EDuration="3.314250091s" podCreationTimestamp="2026-01-30 17:56:19 +0000 UTC" firstStartedPulling="2026-01-30 17:56:20.88141375 +0000 UTC m=+5635.519371096" lastFinishedPulling="2026-01-30 17:56:21.715476019 +0000 UTC m=+5636.353433365" observedRunningTime="2026-01-30 17:56:22.306505542 +0000 UTC m=+5636.944462888" watchObservedRunningTime="2026-01-30 17:56:22.314250091 +0000 UTC m=+5636.952207427"
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.323190 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385883 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385959 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386049 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386066 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") "
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.387719 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs" (OuterVolumeSpecName: "logs") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.393645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b" (OuterVolumeSpecName: "kube-api-access-z5c4b") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "kube-api-access-z5c4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.397542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts" (OuterVolumeSpecName: "scripts") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.400924 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.436090 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.448693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.458251 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data" (OuterVolumeSpecName: "config-data") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.487975 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488379 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488394 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488403 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488414 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488422 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488429 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.258897 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.258964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5"} Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.259111 4766 scope.go:117] "RemoveContainer" containerID="c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.316511 4766 scope.go:117] "RemoveContainer" containerID="7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.333427 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.344659 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367286 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: E0130 17:56:23.367777 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: E0130 17:56:23.367845 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367855 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.368649 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.368685 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.369840 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.371997 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.379473 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418783 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521378 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521537 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521586 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.522714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.524972 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.530997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.530997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.537264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") 
pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.539120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.540300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.685983 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.050688 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" path="/var/lib/kubelet/pods/ae29236e-6325-4cee-99e8-45b5dbfdae9d/volumes" Jan 30 17:56:24 crc kubenswrapper[4766]: W0130 17:56:24.139243 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a81891_2796_4952_bf9e_9a9f83668e34.slice/crio-c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c WatchSource:0}: Error finding container c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c: Status 404 returned error can't find the container with id c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.155705 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.270464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c"} Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.544744 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.546121 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.546193 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.550706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.602375 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.602462 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.605821 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.606163 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:56:24 
crc kubenswrapper[4766]: I0130 17:56:24.719541 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.282734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"2192d8d5a852ee6ed3f6476d054fb4f52aff5f1be451d555a155c7c49dad9876"} Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.283090 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.286327 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.314586 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 30 17:56:26 crc kubenswrapper[4766]: I0130 17:56:26.293957 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"b5b53a4ca11ebcd9760615d00e06b7cf70cec663cd6c22fc2f2a84d6c6a88377"} Jan 30 17:56:26 crc kubenswrapper[4766]: I0130 17:56:26.321136 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.3211141619999998 podStartE2EDuration="3.321114162s" podCreationTimestamp="2026-01-30 17:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:26.31738588 +0000 UTC m=+5640.955343256" watchObservedRunningTime="2026-01-30 17:56:26.321114162 +0000 UTC m=+5640.959071508" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.302201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.661786 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.712097 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:28 crc kubenswrapper[4766]: I0130 17:56:28.310254 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe" containerID="cri-o://8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" gracePeriod=30 Jan 30 17:56:28 crc kubenswrapper[4766]: I0130 17:56:28.310210 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler" containerID="cri-o://4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" gracePeriod=30 Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.320918 4766 generic.go:334] "Generic (PLEG): container finished" podID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" exitCode=0 Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.320969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.914377 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.528903 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.837617 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884861 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884885 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884978 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885006 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885573 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.897448 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.897510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t" (OuterVolumeSpecName: "kube-api-access-kgg4t") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "kube-api-access-kgg4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.898627 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts" (OuterVolumeSpecName: "scripts") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.935664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987484 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987524 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987539 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987548 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.992524 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data" (OuterVolumeSpecName: "config-data") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.089840 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358902 4766 generic.go:334] "Generic (PLEG): container finished" podID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" exitCode=0 Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"} Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"bd8c344070ab05d750472242fd65de8a04e107ddf96ae138a125614afab2f3d2"} Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.359001 4766 scope.go:117] "RemoveContainer" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.359012 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.385159 4766 scope.go:117] "RemoveContainer" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.408194 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.426248 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434203 4766 scope.go:117] "RemoveContainer" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.434677 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": container with ID starting with 8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236 not found: ID does not exist" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434739 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} err="failed to get container status \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": rpc error: code = NotFound desc = could not find container \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": container with ID starting with 8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236 not found: ID does not exist" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434768 4766 scope.go:117] "RemoveContainer" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.435195 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": container with ID starting with 4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168 not found: ID does not exist" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.435224 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"} err="failed to get container status \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": rpc error: code = NotFound desc = could not find container \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": container with ID starting with 4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168 not found: ID does not exist" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438378 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.438855 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438874 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe" Jan 
30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.438895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438902 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.439071 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.439102 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.440095 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.444539 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.447776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499302 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499465 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603143 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603260 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603266 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.617412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.617543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.618004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.618430 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.621418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.778524 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.053142 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" path="/var/lib/kubelet/pods/e39e170e-2256-4796-a06f-b1e63a1425cb/volumes" Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.264052 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:32 crc kubenswrapper[4766]: W0130 17:56:32.271322 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod598edf34_3970_416e_b9fb_4de69de61ca1.slice/crio-ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6 WatchSource:0}: Error finding container ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6: Status 404 returned error can't find the container with id ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6 Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.377518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6"} Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.392582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"502ef3346c314657d354bff20d968ed7b9231399eab8fd575b2133cd6c7a0701"} Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.393398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"157ace3d760d4885134b3cac4a4f23aa79dd7bbc39cd2738fc79abde829f0bec"} Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.421712 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.421691854 podStartE2EDuration="2.421691854s" podCreationTimestamp="2026-01-30 17:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:33.413853381 +0000 UTC m=+5648.051810727" watchObservedRunningTime="2026-01-30 17:56:33.421691854 +0000 UTC m=+5648.059649200" Jan 30 17:56:35 crc 
kubenswrapper[4766]: I0130 17:56:35.482646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 17:56:36 crc kubenswrapper[4766]: I0130 17:56:36.779374 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 17:56:39 crc kubenswrapper[4766]: I0130 17:56:39.045860 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:56:39 crc kubenswrapper[4766]: I0130 17:56:39.046265 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:56:42 crc kubenswrapper[4766]: I0130 17:56:42.002061 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045144 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045785 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045831 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.046442 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.046531 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" gracePeriod=600 Jan 30 17:57:09 crc kubenswrapper[4766]: E0130 17:57:09.175284 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:57:09 crc kubenswrapper[4766]: 
I0130 17:57:09.741513 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" exitCode=0 Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.741620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"} Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.741934 4766 scope.go:117] "RemoveContainer" containerID="8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78" Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.743695 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:57:09 crc kubenswrapper[4766]: E0130 17:57:09.744066 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:57:19 crc kubenswrapper[4766]: I0130 17:57:19.968219 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"] Jan 30 17:57:19 crc kubenswrapper[4766]: I0130 17:57:19.971128 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.010531 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"] Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020358 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020619 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.121893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " 
pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122927 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.147057 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.301669 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.792471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"] Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.834775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"5ad2c1eba53c17a6fdc63d95b48d182c3d830b443e9884d0d531c8423ad14e81"} Jan 30 17:57:21 crc kubenswrapper[4766]: I0130 17:57:21.850622 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05" exitCode=0 Jan 30 17:57:21 crc kubenswrapper[4766]: I0130 17:57:21.850746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"} Jan 30 17:57:22 crc kubenswrapper[4766]: I0130 17:57:22.860310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"} Jan 30 17:57:23 crc kubenswrapper[4766]: I0130 17:57:23.875598 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6" exitCode=0 Jan 30 17:57:23 crc kubenswrapper[4766]: I0130 17:57:23.875681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"} Jan 30 17:57:24 crc kubenswrapper[4766]: I0130 17:57:24.885384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"} Jan 30 17:57:24 crc kubenswrapper[4766]: I0130 17:57:24.916168 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xd9g" podStartSLOduration=3.521994322 podStartE2EDuration="5.916148781s" podCreationTimestamp="2026-01-30 17:57:19 +0000 UTC" firstStartedPulling="2026-01-30 17:57:21.85486187 +0000 UTC m=+5696.492819216" lastFinishedPulling="2026-01-30 17:57:24.249016329 +0000 UTC m=+5698.886973675" observedRunningTime="2026-01-30 17:57:24.903913928 +0000 UTC m=+5699.541871314" watchObservedRunningTime="2026-01-30 17:57:24.916148781 +0000 UTC m=+5699.554106127" Jan 30 17:57:25 crc kubenswrapper[4766]: I0130 17:57:25.039658 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:57:25 crc kubenswrapper[4766]: E0130 17:57:25.040123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 17:57:25 crc kubenswrapper[4766]: E0130 17:57:25.040123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.302136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.302857 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.355474 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.987710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:31 crc kubenswrapper[4766]: I0130 17:57:31.048764 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.544473 4766 scope.go:117] "RemoveContainer" containerID="2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a"
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.567360 4766 scope.go:117] "RemoveContainer" containerID="325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.956282 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xd9g" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server" containerID="cri-o://9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" gracePeriod=2
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.429428 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528464 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
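The "Killing container with a grace period" entry above (gracePeriod=2) is the kubelet stopping registry-server after the API DELETE: the runtime delivers SIGTERM, waits up to the grace period, then escalates to SIGKILL. A minimal process-level sketch of the same pattern on a Unix-like system, using a plain os/exec child in place of a CRI runtime:

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace for the process
    // to exit, then escalates to SIGKILL -- the same shape as the
    // kubelet's "Killing container with a grace period" above.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace expired: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        _ = stopWithGrace(cmd, 2*time.Second)
    }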
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.535102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz" (OuterVolumeSpecName: "kube-api-access-kdndz") pod "73284976-6eff-4a55-b925-9d82571c7f79" (UID: "73284976-6eff-4a55-b925-9d82571c7f79"). InnerVolumeSpecName "kube-api-access-kdndz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.551533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73284976-6eff-4a55-b925-9d82571c7f79" (UID: "73284976-6eff-4a55-b925-9d82571c7f79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631319 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631363 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631374 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968045 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" exitCode=0 Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968085 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"} Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"5ad2c1eba53c17a6fdc63d95b48d182c3d830b443e9884d0d531c8423ad14e81"} Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968126 4766 scope.go:117] "RemoveContainer" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968323 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.592826 4766 scope.go:117] "RemoveContainer" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.604854 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"] Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.615749 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"] Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.634849 4766 scope.go:117] "RemoveContainer" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.685429 4766 scope.go:117] "RemoveContainer" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.687523 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": container with ID starting with 9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d not found: ID does not exist" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.687565 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"} err="failed to get container status \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": rpc error: code = NotFound desc = could not find container \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": container with ID starting with 9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d not found: ID does not exist" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.687615 4766 scope.go:117] "RemoveContainer" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6" Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.688167 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": container with ID starting with eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6 not found: ID does not exist" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688213 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"} err="failed to get container status \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": rpc error: code = NotFound desc = could not find container \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": container with ID starting with eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6 not found: ID does not exist" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688230 4766 scope.go:117] "RemoveContainer" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05" Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.688834 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": container with ID starting with 5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05 not found: ID does not exist" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05" Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688860 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"} err="failed to get container status \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": rpc error: code = NotFound desc = could not find container \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": container with ID starting with 5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05 not found: ID does not exist" Jan 30 17:57:36 crc kubenswrapper[4766]: I0130 17:57:36.052976 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73284976-6eff-4a55-b925-9d82571c7f79" path="/var/lib/kubelet/pods/73284976-6eff-4a55-b925-9d82571c7f79/volumes" Jan 30 17:57:40 crc kubenswrapper[4766]: I0130 17:57:40.040348 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:57:40 crc kubenswrapper[4766]: E0130 17:57:40.041161 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.352121 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"] Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353240 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353256 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server" Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353277 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-utilities" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353285 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-utilities" Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353304 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-content" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353309 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-content" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353493 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.354724 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.354724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.370348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"]
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434167 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.538643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.539242 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
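Each marketplace catalog pod in this log mounts the same three volumes: two emptyDirs ("utilities" and "catalog-content") shared between the extract-* init containers and registry-server, plus a projected service-account token volume (kube-api-access-*) that the API server injects. A sketch of that volume set built with the k8s.io/api/core/v1 types; the names mirror the log, the rest is illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "utilities", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            {Name: "catalog-content", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            // kube-api-access-* is a projected volume the API server
            // adds automatically; shown only to mirror the log's mounts.
            {Name: "kube-api-access-lxrxn", VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{}}},
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }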
\"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.681361 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:57:45 crc kubenswrapper[4766]: I0130 17:57:45.162630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"] Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.064998 4766 generic.go:334] "Generic (PLEG): container finished" podID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerID="b7788b564bdadb9d8530785901307d94fc47f8758660789b46508bb69321c392" exitCode=0 Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.065066 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerDied","Data":"b7788b564bdadb9d8530785901307d94fc47f8758660789b46508bb69321c392"} Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.066580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"75407a0832c673e238385866978541172268aca09f7f655c44988ed38c282199"} Jan 30 17:57:51 crc kubenswrapper[4766]: I0130 17:57:51.038922 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:57:51 crc kubenswrapper[4766]: E0130 17:57:51.039941 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:57:59 crc kubenswrapper[4766]: I0130 17:57:59.191457 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06"} Jan 30 17:58:00 crc kubenswrapper[4766]: I0130 17:58:00.203447 4766 generic.go:334] "Generic (PLEG): container finished" podID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerID="ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06" exitCode=0 Jan 30 17:58:00 crc kubenswrapper[4766]: I0130 17:58:00.203659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerDied","Data":"ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06"} Jan 30 17:58:02 crc kubenswrapper[4766]: I0130 17:58:02.225004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"6b946ffa1f4b52c19c636d3b367874088190e5fd884d68c8436310a53d49129f"} Jan 30 17:58:02 crc kubenswrapper[4766]: I0130 17:58:02.261652 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-hxmkb" podStartSLOduration=3.641671693 podStartE2EDuration="18.26161944s" podCreationTimestamp="2026-01-30 17:57:44 +0000 UTC" firstStartedPulling="2026-01-30 17:57:46.067607793 +0000 UTC m=+5720.705565139" lastFinishedPulling="2026-01-30 17:58:00.68755554 +0000 UTC m=+5735.325512886" observedRunningTime="2026-01-30 17:58:02.253795247 +0000 UTC m=+5736.891752593" watchObservedRunningTime="2026-01-30 17:58:02.26161944 +0000 UTC m=+5736.899576786" Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.039562 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:04 crc kubenswrapper[4766]: E0130 17:58:04.040154 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.683042 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.683362 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:58:05 crc kubenswrapper[4766]: I0130 17:58:05.739535 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=< Jan 30 17:58:05 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 17:58:05 crc kubenswrapper[4766]: > Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.115574 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmmpk"] Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.118047 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.118047 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.130105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.272798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.272899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273669 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
"MountVolume.SetUp succeeded for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.438027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.979052 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"] Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.295862 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24" exitCode=0 Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.295933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24"} Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.296163 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"5cf6928557b6939990dc1e11354457a1ee4fcb0ad54a84fa252e26d53511f230"} Jan 30 17:58:12 crc kubenswrapper[4766]: I0130 17:58:12.313817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd"} Jan 30 17:58:13 crc kubenswrapper[4766]: I0130 17:58:13.323067 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd" exitCode=0 Jan 30 17:58:13 crc kubenswrapper[4766]: I0130 17:58:13.323128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd"} Jan 30 17:58:14 crc kubenswrapper[4766]: I0130 17:58:14.334695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083"} Jan 30 17:58:14 crc kubenswrapper[4766]: I0130 17:58:14.359613 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmmpk" podStartSLOduration=1.883891839 podStartE2EDuration="4.359595664s" podCreationTimestamp="2026-01-30 17:58:10 +0000 UTC" firstStartedPulling="2026-01-30 17:58:11.297418739 +0000 UTC m=+5745.935376085" lastFinishedPulling="2026-01-30 17:58:13.773122564 +0000 UTC m=+5748.411079910" observedRunningTime="2026-01-30 17:58:14.35505334 +0000 UTC m=+5748.993010686" watchObservedRunningTime="2026-01-30 17:58:14.359595664 +0000 UTC m=+5748.997553010" Jan 30 17:58:15 crc kubenswrapper[4766]: I0130 17:58:15.728736 4766 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=< Jan 30 17:58:15 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 17:58:15 crc kubenswrapper[4766]: > Jan 30 17:58:16 crc kubenswrapper[4766]: I0130 17:58:16.049445 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:16 crc kubenswrapper[4766]: E0130 17:58:16.049704 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.439382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.440077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.485247 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.356929 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9frg"] Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.358568 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.358568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.361124 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-z9mbd"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.361340 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.374490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385535 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385721 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385992 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.388367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.403721 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.473869 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrtl\" (UniqueName: \"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489171 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489326 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa514cb2-1f05-42a6-a181-f4f62250bd7c-scripts\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489521 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.492214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.520732 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.591248 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.591564 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrtl\" (UniqueName: \"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.594977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa514cb2-1f05-42a6-a181-f4f62250bd7c-scripts\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.610999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrtl\" (UniqueName: 
\"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.683496 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.719002 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.166497 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.432009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg" event={"ID":"8d8369af-eac5-4d31-b183-1a542da452c5","Type":"ContainerStarted","Data":"7e7e5a24829ec3b421b31fa6c1410ddd8fe104c7268e6838bb1161bbc508962b"} Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.610539 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.917913 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.919486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.936542 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.966855 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"] Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.022922 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125493 
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125555 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.126629 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.126743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.131589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.168166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6"
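Every kubelet record in this section has the same shape: a journald prefix (date, host, unit[pid]) followed by a klog header (severity letter plus MMDD, wall time, pid, file:line) and a structured message. A small regexp sketch in Go for pulling those fields out of lines like the ones above; the pattern is tuned to these entries, not a general journald parser:

    package main

    import (
        "fmt"
        "regexp"
    )

    var line = regexp.MustCompile(
        `^(\w{3} \d+ [\d:]+) (\S+) kubenswrapper\[(\d+)\]: ([IWE])(\d{4} [\d:.]+)\s+\d+ ([\w.]+:\d+)\] (.*)$`)

    func main() {
        s := `Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.242992 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8hgh6"`
        m := line.FindStringSubmatch(s)
        if m == nil {
            fmt.Println("no match")
            return
        }
        // m[1]=journald time, m[2]=host, m[3]=unit pid, m[4]=severity,
        // m[5]=klog time, m[6]=file:line, m[7]=structured message.
        fmt.Println("time:", m[1], "host:", m[2], "severity:", m[4], "source:", m[6])
        fmt.Println("msg:", m[7])
    }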
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.242992 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8hgh6"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.447498 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445"}
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.447540 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"cf468ff065fb884ba4cf1173d1837a6b2420211ea82252bb4e199606c0139d64"}
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.451811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg" event={"ID":"8d8369af-eac5-4d31-b183-1a542da452c5","Type":"ContainerStarted","Data":"b986bc055709cbcc4703a88dca5184d3fc49b9385592ad3b2bfb2a90a8a769b4"}
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.452471 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.499169 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-k9frg" podStartSLOduration=2.499149332 podStartE2EDuration="2.499149332s" podCreationTimestamp="2026-01-30 17:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:23.49757337 +0000 UTC m=+5758.135530726" watchObservedRunningTime="2026-01-30 17:58:23.499149332 +0000 UTC m=+5758.137106678"
Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.793896 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"]
Jan 30 17:58:23 crc kubenswrapper[4766]: W0130 17:58:23.798363 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4fbcc6_ea61_45d4_b3c4_ecaf44f460c5.slice/crio-ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59 WatchSource:0}: Error finding container ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59: Status 404 returned error can't find the container with id ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.110793 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.111653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmmpk" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server" containerID="cri-o://93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083" gracePeriod=2
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.469394 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083" exitCode=0
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.469487 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083"}
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.477116 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa514cb2-1f05-42a6-a181-f4f62250bd7c" containerID="143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445" exitCode=0 Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.477950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerDied","Data":"143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445"} Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.482605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8hgh6" event={"ID":"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5","Type":"ContainerStarted","Data":"43435190680dd7c5bcead45a6b2a56d4fa91d33134a332f628615d3c5cc13704"} Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.483429 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8hgh6" event={"ID":"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5","Type":"ContainerStarted","Data":"ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59"} Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.550533 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8hgh6" podStartSLOduration=2.550508677 podStartE2EDuration="2.550508677s" podCreationTimestamp="2026-01-30 17:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:24.541151062 +0000 UTC m=+5759.179108418" watchObservedRunningTime="2026-01-30 17:58:24.550508677 +0000 UTC m=+5759.188466023" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.635691 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.636944 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.639271 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.654621 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.685031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.685856 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686002 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686759 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.690772 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities" (OuterVolumeSpecName: "utilities") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.722129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh" (OuterVolumeSpecName: "kube-api-access-gjgjh") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "kube-api-access-gjgjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.767624 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788197 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788700 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788712 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788721 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.789374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.807697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.968292 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.497800 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.497796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"5cf6928557b6939990dc1e11354457a1ee4fcb0ad54a84fa252e26d53511f230"} Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.498398 4766 scope.go:117] "RemoveContainer" containerID="93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"bfd610bc75a97b55f0608c218cd67fffef18d73ffd384d902fcbc938a367bde2"} Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"cd50fc48bc3b1dd5e97ea1f2783219fdb7826ffe66b540c7191f0eaec544b888"} Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507548 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507926 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.527828 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.537902 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b4vlg" podStartSLOduration=4.537881222 podStartE2EDuration="4.537881222s" podCreationTimestamp="2026-01-30 17:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:25.537401989 +0000 UTC m=+5760.175359345" watchObservedRunningTime="2026-01-30 17:58:25.537881222 +0000 UTC m=+5760.175838568" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.545324 4766 scope.go:117] "RemoveContainer" containerID="cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.564432 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"] Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.583252 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"] Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.636385 4766 scope.go:117] "RemoveContainer" containerID="71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24" Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.755847 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=< Jan 30 17:58:25 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 17:58:25 crc kubenswrapper[4766]: > Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.064820 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" path="/var/lib/kubelet/pods/3a0dc221-4e00-4488-b09c-31ce4c70b735/volumes" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.077806 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-utilities" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078448 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-utilities" Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078493 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-content" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078518 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-content" Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078533 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078809 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.079664 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.089753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.114056 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.118798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.118882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.220843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.220931 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.222089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.241364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.415695 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.568649 4766 generic.go:334] "Generic (PLEG): container finished" podID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerID="82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26" exitCode=0 Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.569282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerDied","Data":"82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26"} Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.569323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerStarted","Data":"1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c"} Jan 30 17:58:26 crc kubenswrapper[4766]: W0130 17:58:26.995564 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc91a16_cfbf_425d_bca1_f23f53f60beb.slice/crio-7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5 WatchSource:0}: Error finding container 7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5: Status 404 returned error can't find the container with id 7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5 Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.002348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593485 4766 generic.go:334] "Generic (PLEG): container finished" podID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerID="c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5" exitCode=0 Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerDied","Data":"c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5"} Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593941 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerStarted","Data":"7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5"} Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.950889 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.975102 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"944d7612-c3af-4bbd-b193-a2769b8d362d\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.975231 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"944d7612-c3af-4bbd-b193-a2769b8d362d\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.976338 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "944d7612-c3af-4bbd-b193-a2769b8d362d" (UID: "944d7612-c3af-4bbd-b193-a2769b8d362d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.981067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm" (OuterVolumeSpecName: "kube-api-access-k96sm") pod "944d7612-c3af-4bbd-b193-a2769b8d362d" (UID: "944d7612-c3af-4bbd-b193-a2769b8d362d"). InnerVolumeSpecName "kube-api-access-k96sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.053517 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.059663 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.067671 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.075170 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.078987 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.079017 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.605384 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-d22q5" Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.606162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerDied","Data":"1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c"} Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.606204 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c" Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.982156 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.102500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.102557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.103727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fc91a16-cfbf-425d-bca1-f23f53f60beb" (UID: "6fc91a16-cfbf-425d-bca1-f23f53f60beb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.121981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc" (OuterVolumeSpecName: "kube-api-access-cclnc") pod "6fc91a16-cfbf-425d-bca1-f23f53f60beb" (UID: "6fc91a16-cfbf-425d-bca1-f23f53f60beb"). InnerVolumeSpecName "kube-api-access-cclnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.205210 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.205246 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerDied","Data":"7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5"} Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620692 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5" Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620751 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78" Jan 30 17:58:30 crc kubenswrapper[4766]: I0130 17:58:30.051740 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" path="/var/lib/kubelet/pods/9d0580a7-5f19-4aa4-893f-106812b15326/volumes" Jan 30 17:58:30 crc kubenswrapper[4766]: I0130 17:58:30.052383 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" path="/var/lib/kubelet/pods/e09e2e76-7c0b-4efa-b226-18df0a512567/volumes" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.039704 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.040450 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.297714 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.298205 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298230 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create" Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.298261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298269 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update" Jan 30 17:58:31 crc 
kubenswrapper[4766]: I0130 17:58:31.298490 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298530 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.299211 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.308774 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.443864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.443926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.545835 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.545998 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.546645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.564792 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.625652 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.840404 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.842014 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.844514 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.851523 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.954016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.954142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.056521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.056682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.057652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.076852 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.138638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 17:58:32 crc 
kubenswrapper[4766]: I0130 17:58:32.168547 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.624566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 17:58:32 crc kubenswrapper[4766]: W0130 17:58:32.627170 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c327fe8_260c_4117_b55e_3612be41da79.slice/crio-c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80 WatchSource:0}: Error finding container c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80: Status 404 returned error can't find the container with id c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80 Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.647724 4766 scope.go:117] "RemoveContainer" containerID="3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.691480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerStarted","Data":"c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80"} Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.691558 4766 scope.go:117] "RemoveContainer" containerID="869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798" Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.694112 4766 generic.go:334] "Generic (PLEG): container finished" podID="0550f6c1-ed1f-405f-8420-507890f13d75" containerID="1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c" exitCode=0 Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.694151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerDied","Data":"1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c"} Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.698135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerStarted","Data":"7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40"} Jan 30 17:58:33 crc kubenswrapper[4766]: I0130 17:58:33.936576 4766 generic.go:334] "Generic (PLEG): container finished" podID="0c327fe8-260c-4117-b55e-3612be41da79" containerID="b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a" exitCode=0 Jan 30 17:58:33 crc kubenswrapper[4766]: I0130 17:58:33.936682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerDied","Data":"b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a"} Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.284107 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"0550f6c1-ed1f-405f-8420-507890f13d75\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"0550f6c1-ed1f-405f-8420-507890f13d75\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0550f6c1-ed1f-405f-8420-507890f13d75" (UID: "0550f6c1-ed1f-405f-8420-507890f13d75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.433450 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.438398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7" (OuterVolumeSpecName: "kube-api-access-h4pk7") pod "0550f6c1-ed1f-405f-8420-507890f13d75" (UID: "0550f6c1-ed1f-405f-8420-507890f13d75"). InnerVolumeSpecName "kube-api-access-h4pk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.535472 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.727694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.773234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxmkb" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.835116 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"] Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947439 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerDied","Data":"7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40"} Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947480 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947601 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj" Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.967808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.968077 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ck55d" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" containerID="cri-o://0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" gracePeriod=2 Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.066245 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.083060 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.392883 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.558810 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.560847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"0c327fe8-260c-4117-b55e-3612be41da79\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.560975 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"0c327fe8-260c-4117-b55e-3612be41da79\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.561733 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c327fe8-260c-4117-b55e-3612be41da79" (UID: "0c327fe8-260c-4117-b55e-3612be41da79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.566519 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz" (OuterVolumeSpecName: "kube-api-access-t9xrz") pod "0c327fe8-260c-4117-b55e-3612be41da79" (UID: "0c327fe8-260c-4117-b55e-3612be41da79"). InnerVolumeSpecName "kube-api-access-t9xrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662547 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities" (OuterVolumeSpecName: "utilities") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663364 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663516 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.666564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft" (OuterVolumeSpecName: "kube-api-access-5lvft") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "kube-api-access-5lvft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.766985 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.767278 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.777812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.869557 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.959911 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" exitCode=0 Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960064 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.961002 4766 scope.go:117] "RemoveContainer" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.964979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerDied","Data":"c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.965020 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.965035 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.986040 4766 scope.go:117] "RemoveContainer" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.000846 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.029574 4766 scope.go:117] "RemoveContainer" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.038918 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.053891 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" path="/var/lib/kubelet/pods/7a04cef9-eaad-4fba-9aa9-0f15ed426885/volumes" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.054491 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" path="/var/lib/kubelet/pods/e775d594-6680-4e4a-8b1f-01f3a0738015/volumes" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.056223 4766 scope.go:117] "RemoveContainer" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057078 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": container with ID starting with 0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee not found: ID does not exist" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057111 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"} err="failed to get container status \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": rpc error: code = NotFound desc = could not find container \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": container with ID starting with 0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee not found: ID does not exist" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057130 4766 scope.go:117] "RemoveContainer" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057507 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": container with ID starting with f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd not found: ID does not exist" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057548 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd"} err="failed to get container status \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": rpc error: code = NotFound desc = could not find 
container \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": container with ID starting with f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd not found: ID does not exist" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057575 4766 scope.go:117] "RemoveContainer" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057948 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": container with ID starting with cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54 not found: ID does not exist" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057976 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54"} err="failed to get container status \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": rpc error: code = NotFound desc = could not find container \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": container with ID starting with cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54 not found: ID does not exist" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176001 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176441 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176454 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176469 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-utilities" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176475 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-utilities" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176491 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176497 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176513 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176521 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176542 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-content" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176551 4766 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-content" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176729 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176737 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176747 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.178033 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.180938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.180985 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-2h9xz" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.181009 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.189556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195023 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195070 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " 
pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.297564 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298893 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.302862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.314770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.317058 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.498891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:38 crc kubenswrapper[4766]: I0130 17:58:38.132216 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:38 crc kubenswrapper[4766]: W0130 17:58:38.143951 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb984d4_df63_4a4e_b808_e30c97f6f606.slice/crio-faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566 WatchSource:0}: Error finding container faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566: Status 404 returned error can't find the container with id faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566 Jan 30 17:58:38 crc kubenswrapper[4766]: I0130 17:58:38.991817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566"} Jan 30 17:58:45 crc kubenswrapper[4766]: I0130 17:58:45.040546 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:45 crc kubenswrapper[4766]: E0130 17:58:45.042520 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:48 crc kubenswrapper[4766]: I0130 17:58:48.035823 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:58:48 crc kubenswrapper[4766]: I0130 17:58:48.051205 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:58:50 crc kubenswrapper[4766]: I0130 17:58:50.051199 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c267d58-0d99-463b-9011-34118e7f961a" path="/var/lib/kubelet/pods/9c267d58-0d99-463b-9011-34118e7f961a/volumes" Jan 30 17:58:53 crc kubenswrapper[4766]: I0130 17:58:53.147343 4766 generic.go:334] "Generic (PLEG): container finished" podID="0eb984d4-df63-4a4e-b808-e30c97f6f606" containerID="03043db2deda6cf603d122dd759c870251f616eb6f723b24bbdfb636cc6e75be" exitCode=0 Jan 30 17:58:53 crc kubenswrapper[4766]: I0130 17:58:53.147408 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerDied","Data":"03043db2deda6cf603d122dd759c870251f616eb6f723b24bbdfb636cc6e75be"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.159508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" 
event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"efe7b09ed864756182729316e61ca03a5eb0cbef21aee43310bada2149c9ffb3"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.160209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"beee8500ff584654976b6f044659339aa9095538b851e561ae295d5fdc9064a4"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.160233 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.764167 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-5c95b64c75-5mhgs" podStartSLOduration=3.5937698080000002 podStartE2EDuration="17.764139865s" podCreationTimestamp="2026-01-30 17:58:37 +0000 UTC" firstStartedPulling="2026-01-30 17:58:38.146868084 +0000 UTC m=+5772.784825430" lastFinishedPulling="2026-01-30 17:58:52.317238141 +0000 UTC m=+5786.955195487" observedRunningTime="2026-01-30 17:58:54.182446335 +0000 UTC m=+5788.820403681" watchObservedRunningTime="2026-01-30 17:58:54.764139865 +0000 UTC m=+5789.402097221" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.767731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.770500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774523 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774557 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774619 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.779843 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.863998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864829 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.967990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.968325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.985901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.986787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.104488 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.191355 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.379642 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.384648 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.388201 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.393570 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.484212 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.484283 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.586679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.587128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.587678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.595053 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.712556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:55 crc 
kubenswrapper[4766]: I0130 17:58:55.727752 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.849467 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.111421 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.114605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.117280 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.128129 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.206979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.207135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.207310 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.211203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.218472 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.224170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"626b26395f7ce3aae2dc570650fe62987d13d3b3d64bd55bae6643135934c3bb"} Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.312898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.313901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.313956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.314054 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.316602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.324326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.326828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.328064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.439891 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.763676 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k9frg" podUID="8d8369af-eac5-4d31-b183-1a542da452c5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 17:58:56 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 17:58:56 crc kubenswrapper[4766]: > Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.786735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.788456 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.913426 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.914641 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.917815 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.923212 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.935724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936102 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936163 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: 
\"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.963149 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.037726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.040611 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:57 crc kubenswrapper[4766]: E0130 17:58:57.041071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.058340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.237904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"3dbe81e46eed52883df8dfc889eb0ab8c07352aa770fac3bae2b8943846bbc9f"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.240170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.240219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.247922 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.788409 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:58 crc kubenswrapper[4766]: I0130 17:58:58.254199 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerID="31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303" exitCode=0 Jan 30 17:58:58 crc kubenswrapper[4766]: I0130 17:58:58.254262 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.263610 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerID="622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3" exitCode=0 Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.263667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerDied","Data":"622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.264326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerStarted","Data":"f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.268470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.326119 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-8nm42" podStartSLOduration=3.326094811 podStartE2EDuration="3.326094811s" podCreationTimestamp="2026-01-30 17:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:59.305046729 +0000 UTC m=+5793.943004075" watchObservedRunningTime="2026-01-30 17:58:59.326094811 +0000 UTC m=+5793.964052147" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.287614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b"} Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.794116 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920596 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920824 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925942 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run" (OuterVolumeSpecName: "var-run") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925950 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.926562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts" (OuterVolumeSpecName: "scripts") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.926760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.950259 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4" (OuterVolumeSpecName: "kube-api-access-zkmp4") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "kube-api-access-zkmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025298 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025348 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025363 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025375 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025389 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025401 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299815 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerDied","Data":"f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446"} Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299889 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.725485 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-k9frg" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.881740 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.893607 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.050651 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" path="/var/lib/kubelet/pods/f4f4b9f0-b0d7-490c-984f-b50a40b2b723/volumes" Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.309578 4766 generic.go:334] "Generic (PLEG): container finished" podID="37d87bf7-0bd7-4201-b0e3-0d1b8062c930" containerID="c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b" exitCode=0 Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.309634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerDied","Data":"c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b"} Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.313835 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerID="ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a" exitCode=0 Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.313877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a"} Jan 30 17:59:10 crc kubenswrapper[4766]: I0130 17:59:10.040170 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:10 crc kubenswrapper[4766]: E0130 17:59:10.041135 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.658627 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.838865 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839424 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.845211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts" (OuterVolumeSpecName: "scripts") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.846334 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data" (OuterVolumeSpecName: "config-data") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.862735 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.866453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942397 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942444 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942457 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942468 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.351318 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.356636 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d"} Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441357 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.585806 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.587925 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-kprnv_openstack(20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.589246 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.451699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"4f36cf88151b4841c0255a4ec15b29656e21be70b64fa7810306e1a52ce7136a"} Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.452809 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:59:13 crc kubenswrapper[4766]: E0130 17:59:13.454527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.495990 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-l7mdv" podStartSLOduration=2.490489973 podStartE2EDuration="19.495971295s" podCreationTimestamp="2026-01-30 17:58:54 +0000 UTC" firstStartedPulling="2026-01-30 17:58:55.72099005 +0000 UTC m=+5790.358947396" 
lastFinishedPulling="2026-01-30 17:59:12.726471372 +0000 UTC m=+5807.364428718" observedRunningTime="2026-01-30 17:59:13.492608484 +0000 UTC m=+5808.130565830" watchObservedRunningTime="2026-01-30 17:59:13.495971295 +0000 UTC m=+5808.133928641" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.674893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677030 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="init" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677300 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="init" Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677430 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677507 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677605 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677686 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.678038 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.683111 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.688789 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.691762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"]
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.712663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.712933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.713134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815212 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.816206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.836356 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:15 crc kubenswrapper[4766]: I0130 17:59:15.055051 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:15 crc kubenswrapper[4766]: I0130 17:59:15.621911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"]
Jan 30 17:59:15 crc kubenswrapper[4766]: W0130 17:59:15.624728 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4834c01_4dd4_4f39_aa18_6abc2d33686c.slice/crio-1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d WatchSource:0}: Error finding container 1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d: Status 404 returned error can't find the container with id 1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d
Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486148 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e" exitCode=0
Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e"}
Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486552 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d"}
Jan 30 17:59:17 crc kubenswrapper[4766]: I0130 17:59:17.498047 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c"}
Jan 30 17:59:18 crc kubenswrapper[4766]: I0130 17:59:18.510703 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c" exitCode=0
Jan 30 17:59:18 crc kubenswrapper[4766]: I0130 17:59:18.510766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c"}
Jan 30 17:59:19 crc kubenswrapper[4766]: I0130 17:59:19.521418 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678"}
Jan 30 17:59:19 crc kubenswrapper[4766]: I0130 17:59:19.542952 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcxq8" podStartSLOduration=3.116917657 podStartE2EDuration="5.542934052s" podCreationTimestamp="2026-01-30 17:59:14 +0000 UTC" firstStartedPulling="2026-01-30 17:59:16.48786137 +0000 UTC m=+5811.125818716" lastFinishedPulling="2026-01-30 17:59:18.913877765 +0000 UTC m=+5813.551835111" observedRunningTime="2026-01-30 17:59:19.537285049 +0000 UTC m=+5814.175242395" watchObservedRunningTime="2026-01-30 17:59:19.542934052 +0000 UTC m=+5814.180891398"
Jan 30 17:59:22 crc kubenswrapper[4766]: I0130 17:59:22.040129 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:59:22 crc kubenswrapper[4766]: E0130 17:59:22.040940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.056003 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.056459 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.102199 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.140646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-l7mdv"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.650535 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.705782 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"]
Jan 30 17:59:26 crc kubenswrapper[4766]: I0130 17:59:26.598069 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7"}
Jan 30 17:59:27 crc kubenswrapper[4766]: I0130 17:59:27.613791 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcxq8" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server" containerID="cri-o://6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678" gracePeriod=2
Jan 30 17:59:28 crc kubenswrapper[4766]: I0130 17:59:28.629346 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678" exitCode=0
Jan 30 17:59:28 crc kubenswrapper[4766]: I0130 17:59:28.629481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678"}
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.357562 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") "
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435596 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") "
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435648 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") "
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.436793 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities" (OuterVolumeSpecName: "utilities") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.443912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6" (OuterVolumeSpecName: "kube-api-access-bs6b6") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "kube-api-access-bs6b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.507653 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537582 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537623 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537642 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") on node \"crc\" DevicePath \"\""
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d"}
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655555 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8"
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655579 4766 scope.go:117] "RemoveContainer" containerID="6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678"
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.663731 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerID="9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7" exitCode=0
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.663811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7"}
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.698145 4766 scope.go:117] "RemoveContainer" containerID="380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c"
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.721719 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"]
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.732384 4766 scope.go:117] "RemoveContainer" containerID="dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e"
Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.732619 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"]
Jan 30 17:59:30 crc kubenswrapper[4766]: I0130 17:59:30.052385 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" path="/var/lib/kubelet/pods/e4834c01-4dd4-4f39-aa18-6abc2d33686c/volumes"
Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.691327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39"}
Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.846362 4766 scope.go:117] "RemoveContainer" containerID="bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa"
Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.886997 4766 scope.go:117] "RemoveContainer" containerID="a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a"
Jan 30 17:59:35 crc kubenswrapper[4766]: I0130 17:59:35.039637 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:59:35 crc kubenswrapper[4766]: E0130 17:59:35.040209 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:59:47 crc kubenswrapper[4766]: I0130 17:59:47.040078 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:59:47 crc kubenswrapper[4766]: E0130 17:59:47.041031 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.633913 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podStartSLOduration=21.949871511 podStartE2EDuration="57.633892876s" podCreationTimestamp="2026-01-30 17:58:55 +0000 UTC" firstStartedPulling="2026-01-30 17:58:56.229787128 +0000 UTC m=+5790.867744474" lastFinishedPulling="2026-01-30 17:59:31.913808493 +0000 UTC m=+5826.551765839" observedRunningTime="2026-01-30 17:59:32.715618122 +0000 UTC m=+5827.353575468" watchObservedRunningTime="2026-01-30 17:59:52.633892876 +0000 UTC m=+5847.271850222"
Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.645199 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"]
Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.645453 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd" containerID="cri-o://e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39" gracePeriod=30
Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.949998 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerID="e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39" exitCode=0
Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.950435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39"}
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.204285 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv"
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.241415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") "
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.241520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") "
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.286221 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" (UID: "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.326812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" (UID: "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.344548 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.344603 4766 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") on node \"crc\" DevicePath \"\""
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"3dbe81e46eed52883df8dfc889eb0ab8c07352aa770fac3bae2b8943846bbc9f"}
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961524 4766 scope.go:117] "RemoveContainer" containerID="e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39"
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961521 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv"
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.997042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"]
Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.998254 4766 scope.go:117] "RemoveContainer" containerID="9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7"
Jan 30 17:59:54 crc kubenswrapper[4766]: I0130 17:59:54.006331 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"]
Jan 30 17:59:54 crc kubenswrapper[4766]: I0130 17:59:54.056156 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" path="/var/lib/kubelet/pods/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09/volumes"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.221974 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"]
Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223068 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-content"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223081 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-content"
Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223102 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223108 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd"
Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223132 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223140 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server"
Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223149 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="init"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223156 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="init"
Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223170 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-utilities"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223190 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-utilities"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223375 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223389 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.224388 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.232722 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"]
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.232878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.320541 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.320657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.421655 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.421791 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.422336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.428984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.589963 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-b9qv6"
Jan 30 17:59:58 crc kubenswrapper[4766]: I0130 17:59:58.066196 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"]
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.008150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee"}
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.008563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"7bfc45edc16a9568b1b93dc25fc7b80b90ce50c9462614d0b5ef7a5a2181ea6a"}
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.039074 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:59:59 crc kubenswrapper[4766]: E0130 17:59:59.039349 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.419226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.420891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423349 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.429339 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472775 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574528 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.576303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.577584 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.587358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.587516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.588148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
\"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs" Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.588351 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs" Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.749885 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-422fs" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.147505 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"] Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.149253 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.152527 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.152542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.165074 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"] Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191362 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.292847 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: 
I0130 18:00:00.292935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.292962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.295974 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.300995 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"] Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.311770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.313258 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.482849 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.986703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"] Jan 30 18:00:00 crc kubenswrapper[4766]: W0130 18:00:00.989635 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaef52df_0dea_425d_ac97_09334d4d44bf.slice/crio-1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf WatchSource:0}: Error finding container 1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf: Status 404 returned error can't find the container with id 1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.025796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c"} Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.025846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"0c9862552caf41cbd5f638444ecf4b54dfc7a0268d5453a489b3d0fa94a6938c"} Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.031114 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2dd03c7-c095-4563-9107-802624d1e4f5" containerID="e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee" exitCode=0 Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.031334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerDied","Data":"e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee"} Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.033815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerStarted","Data":"1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"} Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.501878 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-f25c5"] Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.503708 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.505557 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.512484 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.522782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f25c5"]
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.618951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.619654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.619754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620041 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721737 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721858 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.722511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.722839 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.730678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.734550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.735038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.745447 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.859531 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.044756 4766 generic.go:334] "Generic (PLEG): container finished" podID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerID="630dd27806d1cf4d4d5c6404849501d2feb6f83451792694c5d5c0e9409fa40e" exitCode=0
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.053045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerDied","Data":"630dd27806d1cf4d4d5c6404849501d2feb6f83451792694c5d5c0e9409fa40e"}
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.365799 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f25c5"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.596781 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.599397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.603222 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.604329 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.606364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638613 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638728 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741641 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.742381 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741809 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.742723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.743487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749058 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749241 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749268 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.917412 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.057736 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"fac55d0c967812e98dc725e19eb5f5fbc02f58bbe462c23ee392098dbdd974c2"}
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.510245 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:03 crc kubenswrapper[4766]: W0130 18:00:03.535475 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aade569_1bea_4133_8ea3_51cea870143d.slice/crio-4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8 WatchSource:0}: Error finding container 4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8: Status 404 returned error can't find the container with id 4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.547801 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.573369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.580100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.580638 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl" (OuterVolumeSpecName: "kube-api-access-ftltl") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "kube-api-access-ftltl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.673796 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.675000 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.675066 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.069731 4766 generic.go:334] "Generic (PLEG): container finished" podID="1c79d934-7880-4883-bee6-c60ea7745616" containerID="3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c" exitCode=0
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.070373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerDied","Data":"3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.077387 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.078143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerDied","Data":"1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.078207 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.079132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.625560 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"]
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.640810 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"]
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.654604 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.122974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"aa75a431b6569eb0fd2b042b2f24d3adf9d3e399b96bbd129c4518bba6afa585"}
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.123610 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-422fs"
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.161451 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-422fs" podStartSLOduration=6.161414184 podStartE2EDuration="6.161414184s" podCreationTimestamp="2026-01-30 17:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:00:05.151573887 +0000 UTC m=+5859.789531223" watchObservedRunningTime="2026-01-30 18:00:05.161414184 +0000 UTC m=+5859.799371540"
Jan 30 18:00:06 crc kubenswrapper[4766]: I0130 18:00:06.058954 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" path="/var/lib/kubelet/pods/20c37317-bc31-4749-bf2a-000f3786ebdb/volumes"
Jan 30 18:00:06 crc kubenswrapper[4766]: I0130 18:00:06.140461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"28540abc823fad4e2aecdd708f90c28850196ec3b4fb5e1876b10103520bcc9f"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.149489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.151433 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.173465 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" podStartSLOduration=3.487921605 podStartE2EDuration="10.173443743s" podCreationTimestamp="2026-01-30 17:59:57 +0000 UTC" firstStartedPulling="2026-01-30 17:59:58.068884815 +0000 UTC m=+5852.706842161" lastFinishedPulling="2026-01-30 18:00:04.754406953 +0000 UTC m=+5859.392364299" observedRunningTime="2026-01-30 18:00:06.157419447 +0000 UTC m=+5860.795376793" watchObservedRunningTime="2026-01-30 18:00:07.173443743 +0000 UTC m=+5861.811401089"
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.174832 4766 generic.go:334] "Generic (PLEG): container finished" podID="5aade569-1bea-4133-8ea3-51cea870143d" containerID="5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb" exitCode=0
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.174925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerDied","Data":"5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb"}
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.177132 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a" containerID="e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a" exitCode=0
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.177199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerDied","Data":"e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.189096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"c8abed3e0876d78fcfbca64cfcab0d25b7920f91fec1ac552487164b0fb4d18b"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.189637 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.191655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"1ed70f33dc959997aef972353763f2d4044b2f90403962aaac7d9747f5c05eac"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.191892 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.212050 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-f25c5" podStartSLOduration=6.526098213 podStartE2EDuration="10.212031489s" podCreationTimestamp="2026-01-30 18:00:01 +0000 UTC" firstStartedPulling="2026-01-30 18:00:02.37716615 +0000 UTC m=+5857.015123496" lastFinishedPulling="2026-01-30 18:00:06.063099426 +0000 UTC m=+5860.701056772" observedRunningTime="2026-01-30 18:00:11.206419436 +0000 UTC m=+5865.844376802" watchObservedRunningTime="2026-01-30 18:00:11.212031489 +0000 UTC m=+5865.849988835"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.233433 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-qrfbg" podStartSLOduration=6.667833168 podStartE2EDuration="9.233409952s" podCreationTimestamp="2026-01-30 18:00:02 +0000 UTC" firstStartedPulling="2026-01-30 18:00:03.543428113 +0000 UTC m=+5858.181385459" lastFinishedPulling="2026-01-30 18:00:06.109004897 +0000 UTC m=+5860.746962243" observedRunningTime="2026-01-30 18:00:11.229930166 +0000 UTC m=+5865.867887522" watchObservedRunningTime="2026-01-30 18:00:11.233409952 +0000 UTC m=+5865.871367298"
Jan 30 18:00:14 crc kubenswrapper[4766]: I0130 18:00:14.041428 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:14 crc kubenswrapper[4766]: E0130 18:00:14.042332 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:14 crc kubenswrapper[4766]: I0130 18:00:14.807559 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-422fs"
Jan 30 18:00:16 crc kubenswrapper[4766]: I0130 18:00:16.891534 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:17 crc kubenswrapper[4766]: I0130 18:00:17.949612 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:29 crc kubenswrapper[4766]: I0130 18:00:29.040369 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:29 crc kubenswrapper[4766]: E0130 18:00:29.041679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:33 crc kubenswrapper[4766]: I0130 18:00:33.007415 4766 scope.go:117] "RemoveContainer" containerID="e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"
Jan 30 18:00:44 crc kubenswrapper[4766]: I0130 18:00:44.039721 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:44 crc kubenswrapper[4766]: E0130 18:00:44.040684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:55 crc kubenswrapper[4766]: I0130 18:00:55.039664 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:55 crc kubenswrapper[4766]: E0130 18:00:55.040616 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.151787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:00 crc kubenswrapper[4766]: E0130 18:01:00.152978 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.152999 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.153297 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.154194 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.166779 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.256497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.256921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.257025 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.257141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359402 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.366970 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.367218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.369132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.375146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.529816 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.852995 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.855260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.858752 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-frkxg"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.858924 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.859041 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.859224 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.883820 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921540 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921816 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" containerID="cri-o://cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921964 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" containerID="cri-o://ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976397 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976538 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.985754 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.985998 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" containerID="cri-o://155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.986464 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" containerID="cri-o://ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" gracePeriod=30
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.044600 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.060150 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.062205 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.069591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089056 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089365 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.090143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.090808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.095812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.096081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.113472 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.180578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.191543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.191894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192057 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192382 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295670 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295753 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.296302 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.298287 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.299326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.305719 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.321891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.412820 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.463842 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.512833 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.514601 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.535097 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.602997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603128 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603193 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.692092 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704686 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704735 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.705440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.705585 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.706102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.710562 4766 generic.go:334] "Generic (PLEG): container finished" podID="40e23b5f-28fc-4354-94de-90d54908e61b" containerID="155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" exitCode=143
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.710648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8"}
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.738280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.762434 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.773206 4766 generic.go:334] "Generic (PLEG): container finished" podID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerID="cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" exitCode=143
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.773284 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10"}
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.832416 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerStarted","Data":"c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1"}
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.832460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerStarted","Data":"0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3"}
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.891587 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496601-pl6qc" podStartSLOduration=1.891562028 podStartE2EDuration="1.891562028s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:01.879666634 +0000 UTC m=+5916.517623980" watchObservedRunningTime="2026-01-30 18:01:01.891562028 +0000 UTC m=+5916.529519374"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.892895 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5"
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.088391 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"]
Jan 30 18:01:02 crc kubenswrapper[4766]: W0130 18:01:02.095953 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8806aa45_5ae9_453c_8bc8_23fe8daa8e9d.slice/crio-4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e WatchSource:0}: Error finding container 4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e: Status 404 returned error can't find the container with id 4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e
Jan 30 18:01:02 crc kubenswrapper[4766]: W0130 18:01:02.398390 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24a2653_c901_4306_a56b_2e2de8006403.slice/crio-cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91 WatchSource:0}: Error finding container cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91: Status 404 returned error can't find the container with id cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.398601 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"]
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.843120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e"}
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.845001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"826a0c9cee53980f380468b130146d783aa7261856c38f2757af740808b26324"}
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.845938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91"}
Jan 30 18:01:03 crc kubenswrapper[4766]: I0130 18:01:03.856606 4766 generic.go:334] "Generic (PLEG): container finished" podID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerID="c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1" exitCode=0
Jan 30 18:01:03 crc kubenswrapper[4766]: I0130 18:01:03.856704 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerDied","Data":"c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1"}
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.052098 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"]
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.056870 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v7zdn"]
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.065692 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v7zdn"]
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.073938 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"]
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.866536 4766 generic.go:334] "Generic (PLEG): container finished" podID="40e23b5f-28fc-4354-94de-90d54908e61b" containerID="ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" exitCode=0
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.866672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a"}
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.871669 4766 generic.go:334] "Generic (PLEG): container finished" podID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerID="ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" exitCode=0
Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.871904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae"}
Jan 30 18:01:06 crc kubenswrapper[4766]: I0130 18:01:06.069249 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" path="/var/lib/kubelet/pods/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9/volumes"
Jan 30 18:01:06 crc kubenswrapper[4766]: I0130 18:01:06.070346 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" path="/var/lib/kubelet/pods/aa06091d-37e1-4828-9f71-7160f12ac3de/volumes"
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.040095 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:01:09 crc kubenswrapper[4766]: E0130 18:01:09.040844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.151326 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.161035 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.174957 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268392 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268498 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268583 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268717 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268785 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268816 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268836 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268886 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268938 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.269027 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") "
Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.269046 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID:
\"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.270560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275217 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs" (OuterVolumeSpecName: "logs") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275523 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs" (OuterVolumeSpecName: "logs") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g" (OuterVolumeSpecName: "kube-api-access-42l2g") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "kube-api-access-42l2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq" (OuterVolumeSpecName: "kube-api-access-z6dvq") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "kube-api-access-z6dvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280972 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts" (OuterVolumeSpecName: "scripts") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.281941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts" (OuterVolumeSpecName: "scripts") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.286424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94" (OuterVolumeSpecName: "kube-api-access-wkf94") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "kube-api-access-wkf94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.289654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph" (OuterVolumeSpecName: "ceph") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.290311 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph" (OuterVolumeSpecName: "ceph") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.333396 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.345319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.372992 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373038 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373050 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373061 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373070 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373079 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373092 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373101 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373110 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373119 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389487 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389558 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389572 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389584 4766 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.402516 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.410359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data" (OuterVolumeSpecName: "config-data") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.434977 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data" (OuterVolumeSpecName: "config-data") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.449441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data" (OuterVolumeSpecName: "config-data") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491942 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491983 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491995 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.492009 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.929658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.930274 4766 scope.go:117] "RemoveContainer" containerID="ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.930602 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942850 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerDied","Data":"0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942891 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942964 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.973604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981688 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981848 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b8665dc85-mqdzq" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon" containerID="cri-o://a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" gracePeriod=30 Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981829 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b8665dc85-mqdzq" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log" containerID="cri-o://f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" gracePeriod=30 Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.988253 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.989259 4766 scope.go:117] "RemoveContainer" containerID="cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.995426 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.995469 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217"} Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.004649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"a636aed8819668fe27e888c223782c929538ea199ee28b047c4b35c7334f0992"} Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.004759 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.021810 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.034393 4766 scope.go:117] "RemoveContainer" containerID="ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.068291 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" path="/var/lib/kubelet/pods/7946b0e6-2de2-4708-ac83-ce1ad398d8a5/volumes" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.068928 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069694 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069733 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069747 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069757 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069779 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069785 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069796 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069802 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069825 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070082 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070103 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070118 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070127 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" 
containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070138 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.071876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.075998 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.076159 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.076160 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.086671 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-646c4b5b47-xr8w7" podStartSLOduration=2.93493563 podStartE2EDuration="10.086646602s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="2026-01-30 18:01:02.101454507 +0000 UTC m=+5916.739411843" lastFinishedPulling="2026-01-30 18:01:09.253165469 +0000 UTC m=+5923.891122815" observedRunningTime="2026-01-30 18:01:10.013024516 +0000 UTC m=+5924.650981862" watchObservedRunningTime="2026-01-30 18:01:10.086646602 +0000 UTC m=+5924.724603948" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.108971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.110639 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.110717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111001 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c4d556457-cgwh5" podStartSLOduration=2.259421653 podStartE2EDuration="9.111059728s" podCreationTimestamp="2026-01-30 18:01:01 +0000 UTC" firstStartedPulling="2026-01-30 18:01:02.400725203 +0000 UTC m=+5917.038682549" lastFinishedPulling="2026-01-30 18:01:09.252363278 +0000 UTC m=+5923.890320624" observedRunningTime="2026-01-30 18:01:10.058843874 +0000 UTC m=+5924.696801240" watchObservedRunningTime="2026-01-30 18:01:10.111059728 +0000 UTC m=+5924.749017074" Jan 30 18:01:10 crc 
kubenswrapper[4766]: I0130 18:01:10.111132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.134978 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.137931 4766 scope.go:117] "RemoveContainer" containerID="155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.152606 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.152597 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b8665dc85-mqdzq" podStartSLOduration=2.702462426 podStartE2EDuration="10.152578509s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="2026-01-30 18:01:01.832344954 +0000 UTC m=+5916.470302300" lastFinishedPulling="2026-01-30 18:01:09.282461037 +0000 UTC m=+5923.920418383" observedRunningTime="2026-01-30 18:01:10.101242401 +0000 UTC m=+5924.739199747" watchObservedRunningTime="2026-01-30 18:01:10.152578509 +0000 UTC m=+5924.790535855" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.214007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.214712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 
18:01:10.214741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.217494 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.218289 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.219156 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc 
kubenswrapper[4766]: I0130 18:01:10.223412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.223980 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.234985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.235333 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.247256 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.249338 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.252334 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.258222 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.316584 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.316618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317091 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317304 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420835 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 
crc kubenswrapper[4766]: I0130 18:01:10.420908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.421536 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.422331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.424767 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.425733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.427358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.428795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.431091 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.444446 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.639653 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.022986 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.033887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b"} Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.181094 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.256000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:11 crc kubenswrapper[4766]: W0130 18:01:11.257487 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc25f82b3_9296_4814_92b1_59ca5c2bf2a0.slice/crio-5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12 WatchSource:0}: Error finding container 5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12: Status 404 returned error can't find the container with id 5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12 Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.419149 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.419243 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.898981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.900682 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.062445 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" path="/var/lib/kubelet/pods/40e23b5f-28fc-4354-94de-90d54908e61b/volumes" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.063581 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" path="/var/lib/kubelet/pods/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2/volumes" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.081388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"a6c6a8fe72b93334fbe4ee005ea34677fc95740aecfaa7da8f15120190f5ff3a"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.081479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"10079542e3100095ec78d3606b78f1758626c8beaa2fe23967895215c1e592a3"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.092306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"46fe3470a1e8c952a21e5a5c56b106e1450a25e6dcc09ddea71d13186d5cc7eb"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.092354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.125902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"b17245b90c6d6cee0a37f27383e5d755d7649b7adee324e9e95cb666eb4c8082"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.130494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"6a9dfdebbeb7368534cf4006d3c920e47e62b9e8722cc6b77f9bacb63b7b7dcf"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.159331 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.1593078549999998 podStartE2EDuration="3.159307855s" podCreationTimestamp="2026-01-30 18:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:13.146024213 +0000 UTC m=+5927.783981569" watchObservedRunningTime="2026-01-30 18:01:13.159307855 +0000 UTC m=+5927.797265201" Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.182210 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.182188599 podStartE2EDuration="4.182188599s" podCreationTimestamp="2026-01-30 18:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:13.179976069 +0000 UTC m=+5927.817933415" watchObservedRunningTime="2026-01-30 18:01:13.182188599 +0000 UTC m=+5927.820145945" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.431321 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.433236 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.471325 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.480986 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.641547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.641703 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.671300 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 
crc kubenswrapper[4766]: I0130 18:01:20.682455 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.039407 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:21 crc kubenswrapper[4766]: E0130 18:01:21.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216925 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216960 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.217376 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.415193 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.897196 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.230694 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.231070 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.230743 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.231186 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.375261 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.443652 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.694214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.767820 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.082596 4766 scope.go:117] "RemoveContainer" containerID="61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.107759 4766 scope.go:117] "RemoveContainer" containerID="f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.159130 4766 scope.go:117] "RemoveContainer" containerID="b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.424270 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.848665 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.084933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.451583 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550268 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550478 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" containerID="cri-o://5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" gracePeriod=30 Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550987 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" containerID="cri-o://cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" gracePeriod=30 Jan 30 18:01:36 crc kubenswrapper[4766]: I0130 18:01:36.062105 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:36 crc kubenswrapper[4766]: E0130 18:01:36.062754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:38 crc kubenswrapper[4766]: I0130 18:01:38.049606 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 18:01:38 crc kubenswrapper[4766]: I0130 18:01:38.052091 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.027312 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.036203 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 18:01:39 
crc kubenswrapper[4766]: I0130 18:01:39.399737 4766 generic.go:334] "Generic (PLEG): container finished" podID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerID="cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" exitCode=0 Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.399805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.055522 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632e98c6-d202-4c07-9220-636bd07da76d" path="/var/lib/kubelet/pods/632e98c6-d202-4c07-9220-636bd07da76d/volumes" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.056768 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d09a627-470a-4719-a1d8-458eda413878" path="/var/lib/kubelet/pods/9d09a627-470a-4719-a1d8-458eda413878/volumes" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.404560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.424970 4766 generic.go:334] "Generic (PLEG): container finished" podID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" exitCode=137 Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425016 4766 generic.go:334] "Generic (PLEG): container finished" podID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" exitCode=137 Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425122 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"826a0c9cee53980f380468b130146d783aa7261856c38f2757af740808b26324"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425142 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425627 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520561 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.521010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.522809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs" (OuterVolumeSpecName: "logs") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.540308 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.540511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq" (OuterVolumeSpecName: "kube-api-access-n87tq") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "kube-api-access-n87tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.553763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data" (OuterVolumeSpecName: "config-data") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.556453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts" (OuterVolumeSpecName: "scripts") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.620063 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623783 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623816 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623829 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623840 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623852 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638225 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: E0130 18:01:40.638628 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638664 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} err="failed to get container status \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638685 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: E0130 18:01:40.638929 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638949 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} err="failed to get container status \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": rpc error: code = NotFound desc = could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638963 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639282 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} err="failed to get container status \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639334 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} err="failed to get container status \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": rpc error: code = NotFound desc = could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.761899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.770806 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:41 crc kubenswrapper[4766]: I0130 18:01:41.414531 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:01:42 crc kubenswrapper[4766]: I0130 18:01:42.051379 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" path="/var/lib/kubelet/pods/c267e584-67ae-40ca-90dc-5967ee8be5d5/volumes" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.531188 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-757f4f657-jzgr8"] 
Jan 30 18:01:43 crc kubenswrapper[4766]: E0130 18:01:43.531937 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.531952 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log"
Jan 30 18:01:43 crc kubenswrapper[4766]: E0130 18:01:43.531995 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532004 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532170 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532207 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.533647 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.580544 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-757f4f657-jzgr8"]
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591674 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693577 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.694775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.695670 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.696917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.706220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.716001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.857883 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.331331 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-757f4f657-jzgr8"]
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.464795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"e4b9e99da2dd13870511ef416acc84ef67b11b1bc1f720de214806be59e7a4ff"}
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.827977 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.829613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.839309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.920114 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3460-account-create-update-759zj"]
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.921468 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.923238 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.924228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.924738 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.936965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"]
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028328 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.029344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.058328 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.130765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.130843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.131582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.148256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.149224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.275674 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.478666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"c89e08693cd603929762da2a1d881688bf7fe83e4451e349d8b763a26fa9d7a2"}
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.478716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"b524c7b94ab2f980c470347bfa38008ac97401f82df71c662f598776c49d58a3"}
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.510508 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-757f4f657-jzgr8" podStartSLOduration=2.510486369 podStartE2EDuration="2.510486369s" podCreationTimestamp="2026-01-30 18:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:45.502921553 +0000 UTC m=+5960.140878899" watchObservedRunningTime="2026-01-30 18:01:45.510486369 +0000 UTC m=+5960.148443715"
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.614528 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:01:45 crc kubenswrapper[4766]: W0130 18:01:45.774037 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf43513bc_2d21_47b3_8acb_b331c5f5f46f.slice/crio-c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089 WatchSource:0}: Error finding container c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089: Status 404 returned error can't find the container with id c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089
Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.784552 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"]
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.488958 4766 generic.go:334] "Generic (PLEG): container finished" podID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerID="2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513" exitCode=0
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.489018 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerDied","Data":"2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513"}
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.489458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerStarted","Data":"c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089"}
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492437 4766 generic.go:334] "Generic (PLEG): container finished" podID="8ac9189d-ff73-4cd5-8299-276858527c74" containerID="fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5" exitCode=0
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492487 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerDied","Data":"fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5"}
Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492533 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerStarted","Data":"e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7"}
Jan 30 18:01:47 crc kubenswrapper[4766]: I0130 18:01:47.052598 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6cksv"]
Jan 30 18:01:47 crc kubenswrapper[4766]: I0130 18:01:47.063684 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6cksv"]
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.086815 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" path="/var/lib/kubelet/pods/1262aa38-ee4d-4579-b034-3669dd58a238/volumes"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.110786 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.120061 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193385 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") "
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") "
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193669 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"8ac9189d-ff73-4cd5-8299-276858527c74\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") "
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"8ac9189d-ff73-4cd5-8299-276858527c74\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") "
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.197336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f43513bc-2d21-47b3-8acb-b331c5f5f46f" (UID: "f43513bc-2d21-47b3-8acb-b331c5f5f46f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.197651 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ac9189d-ff73-4cd5-8299-276858527c74" (UID: "8ac9189d-ff73-4cd5-8299-276858527c74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.208522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg" (OuterVolumeSpecName: "kube-api-access-626hg") pod "f43513bc-2d21-47b3-8acb-b331c5f5f46f" (UID: "f43513bc-2d21-47b3-8acb-b331c5f5f46f"). InnerVolumeSpecName "kube-api-access-626hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.215414 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2" (OuterVolumeSpecName: "kube-api-access-xxcc2") pod "8ac9189d-ff73-4cd5-8299-276858527c74" (UID: "8ac9189d-ff73-4cd5-8299-276858527c74"). InnerVolumeSpecName "kube-api-access-xxcc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297090 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297138 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297147 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297158 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.512986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerDied","Data":"c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089"}
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.513031 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.513033 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514880 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerDied","Data":"e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7"}
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514908 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8"
Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514923 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.176473 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-276pq"]
Jan 30 18:01:50 crc kubenswrapper[4766]: E0130 18:01:50.177397 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177411 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update"
Jan 30 18:01:50 crc kubenswrapper[4766]: E0130 18:01:50.177430 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177436 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177673 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177694 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.178558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.180502 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nk49g"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.181309 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.190167 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-276pq"]
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.354439 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.354996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.369537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.499558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq"
Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.005892 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-276pq"]
Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.014014 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.040451 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:01:51 crc kubenswrapper[4766]: E0130 18:01:51.040668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.414027 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused"
Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.547951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerStarted","Data":"6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2"}
Jan 30 18:01:53 crc kubenswrapper[4766]: I0130 18:01:53.858068 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:53 crc kubenswrapper[4766]: I0130 18:01:53.858575 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-757f4f657-jzgr8"
Jan 30 18:01:58 crc kubenswrapper[4766]: I0130 18:01:58.610966 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerStarted","Data":"2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169"}
Jan 30 18:01:58 crc kubenswrapper[4766]: I0130 18:01:58.629111 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-276pq" podStartSLOduration=1.680803206 podStartE2EDuration="8.629090784s" podCreationTimestamp="2026-01-30 18:01:50 +0000 UTC" firstStartedPulling="2026-01-30 18:01:51.01380312 +0000 UTC m=+5965.651760466" lastFinishedPulling="2026-01-30 18:01:57.962090698 +0000 UTC m=+5972.600048044" observedRunningTime="2026-01-30 18:01:58.627660116 +0000 UTC m=+5973.265617462" watchObservedRunningTime="2026-01-30 18:01:58.629090784 +0000 UTC m=+5973.267048130"
Jan 30 18:02:00 crc kubenswrapper[4766]: I0130 18:02:00.646481 4766 generic.go:334] "Generic (PLEG): container finished" podID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerID="2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169" exitCode=0
Jan 30 18:02:00 crc kubenswrapper[4766]: I0130 18:02:00.646585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerDied","Data":"2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169"}
Jan 30 18:02:01 crc kubenswrapper[4766]: I0130 18:02:01.413743 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused"
Jan 30 18:02:01 crc kubenswrapper[4766]: I0130 18:02:01.413885 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.013111 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq"
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096554 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") "
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") "
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") "
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.104316 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss" (OuterVolumeSpecName: "kube-api-access-t9gss") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "kube-api-access-t9gss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.129339 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.182412 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data" (OuterVolumeSpecName: "config-data") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197907 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") on node \"crc\" DevicePath \"\""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197953 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197966 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerDied","Data":"6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2"}
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667581 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2"
Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667605 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.679459 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"]
Jan 30 18:02:03 crc kubenswrapper[4766]: E0130 18:02:03.680023 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.680045 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.680334 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.681005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691049 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nk49g"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691337 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691442 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.696740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"]
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.842427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.842886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.843036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.843264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.860845 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-757f4f657-jzgr8" podUID="f7b06d45-03c9-406f-8fc0-79428ec9de8f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.896224 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"]
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.897902 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-675bcfc5ff-kvdtq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.901363 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.938234 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"]
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948565 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948714 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948880 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948961 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.958342 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.962132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.965530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.972991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.973067 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"]
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.974796 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.984849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.984841 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"]
Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.025709 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.041001 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:02:04 crc kubenswrapper[4766]: E0130 18:02:04.041415 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.052942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053017 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.062804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.067163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.071374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.078220 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155723 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.156972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.233207 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.262015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.281728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.282063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.288201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.297297 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.494757 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.611809 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"] Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.711826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54c46d7b9c-z94n2" event={"ID":"364a6690-a249-4765-b86e-b72ca919edb8","Type":"ContainerStarted","Data":"ce8ae0fc85c7a535dbaf17695e17b5e4a72cb6f4ebd4a6284d9c4fdd9ad9ad58"} Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.843302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"] Jan 30 18:02:04 crc kubenswrapper[4766]: W0130 18:02:04.845767 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode11fd011_1725_4cdd_979f_75eecd0329b2.slice/crio-357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b WatchSource:0}: Error finding container 357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b: Status 404 returned error can't find the container with id 357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.028956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"] Jan 30 18:02:05 crc kubenswrapper[4766]: W0130 18:02:05.035475 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f44ca0_52f4_4d4a_aeb8_18275fff50eb.slice/crio-9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4 WatchSource:0}: Error finding container 9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4: Status 404 returned error can't find the container with id 9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4 Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.724587 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-675bcfc5ff-kvdtq" event={"ID":"e11fd011-1725-4cdd-979f-75eecd0329b2","Type":"ContainerStarted","Data":"357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.752297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" event={"ID":"65f44ca0-52f4-4d4a-aeb8-18275fff50eb","Type":"ContainerStarted","Data":"9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.754623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54c46d7b9c-z94n2" event={"ID":"364a6690-a249-4765-b86e-b72ca919edb8","Type":"ContainerStarted","Data":"aa91cb66d167101f26864989ffb0150f0e46af0db8f973a9b047c4e90830006d"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.754760 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.758515 4766 generic.go:334] "Generic (PLEG): container finished" podID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerID="5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" exitCode=137 Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.758586 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" 
event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.781279 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-54c46d7b9c-z94n2" podStartSLOduration=2.781259208 podStartE2EDuration="2.781259208s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:02:05.770821165 +0000 UTC m=+5980.408778511" watchObservedRunningTime="2026-01-30 18:02:05.781259208 +0000 UTC m=+5980.419216554" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.052849 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.134922 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135133 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135210 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.141741 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs" (OuterVolumeSpecName: "logs") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.147852 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n" (OuterVolumeSpecName: "kube-api-access-zd65n") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "kube-api-access-zd65n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.148043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.174042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts" (OuterVolumeSpecName: "scripts") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.210233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data" (OuterVolumeSpecName: "config-data") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239544 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239860 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239981 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.240067 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.240151 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.778391 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e"} Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.778427 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.779639 4766 scope.go:117] "RemoveContainer" containerID="cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.836763 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.849908 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.201687 4766 scope.go:117] "RemoveContainer" containerID="5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.794073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-675bcfc5ff-kvdtq" event={"ID":"e11fd011-1725-4cdd-979f-75eecd0329b2","Type":"ContainerStarted","Data":"056daf7f68fc1873b7c3f4bd33a7243161ce3a6b753d744707ed45ee0fb6cf0e"} Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.794478 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.799024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" event={"ID":"65f44ca0-52f4-4d4a-aeb8-18275fff50eb","Type":"ContainerStarted","Data":"ac33631deed2079516e706577c495fb1391ab0237cb596d6f46246e62043f0d0"} Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.799364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.843098 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-675bcfc5ff-kvdtq" podStartSLOduration=2.431763265 podStartE2EDuration="4.843074945s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="2026-01-30 18:02:04.848751007 +0000 UTC m=+5979.486708353" lastFinishedPulling="2026-01-30 18:02:07.260062687 +0000 UTC m=+5981.898020033" observedRunningTime="2026-01-30 18:02:07.821904298 +0000 UTC m=+5982.459861664" watchObservedRunningTime="2026-01-30 18:02:07.843074945 +0000 UTC m=+5982.481032291" Jan 30 18:02:08 crc kubenswrapper[4766]: I0130 18:02:08.053348 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" path="/var/lib/kubelet/pods/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d/volumes" Jan 30 18:02:14 crc kubenswrapper[4766]: I0130 18:02:14.058865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:14 crc kubenswrapper[4766]: I0130 18:02:14.077386 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" podStartSLOduration=8.855521919 podStartE2EDuration="11.077363317s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="2026-01-30 18:02:05.038378564 +0000 UTC m=+5979.676335910" lastFinishedPulling="2026-01-30 18:02:07.260219962 +0000 UTC m=+5981.898177308" observedRunningTime="2026-01-30 18:02:07.846671083 +0000 UTC m=+5982.484628439" watchObservedRunningTime="2026-01-30 18:02:14.077363317 +0000 UTC m=+5988.715320663" Jan 30 18:02:15 crc kubenswrapper[4766]: I0130 18:02:15.648646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:16 crc kubenswrapper[4766]: I0130 18:02:16.021771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:02:16 crc kubenswrapper[4766]: I0130 18:02:16.056891 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.039485 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.868446 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.892267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.952837 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.953073 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" containerID="cri-o://90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" gracePeriod=30 Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.953217 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" containerID="cri-o://1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" gracePeriod=30 Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.895888 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.933879 4766 generic.go:334] "Generic (PLEG): container finished" podID="e24a2653-c901-4306-a56b-2e2de8006403" containerID="1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" exitCode=0 Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.933933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969"} Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.806884 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:24 crc kubenswrapper[4766]: E0130 18:02:24.807904 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.807917 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: E0130 
18:02:24.807943 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.807949 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.808141 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.808165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.809740 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.815322 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.823063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.956991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.957062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.957088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.060157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.060359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.088557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.139874 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.766588 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:25 crc kubenswrapper[4766]: W0130 18:02:25.774859 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8b8ccc_a37c_45d4_97e9_a3eb1bf7f951.slice/crio-72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9 WatchSource:0}: Error finding container 72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9: Status 404 returned error can't find the container with id 72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9 Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.973553 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerStarted","Data":"72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9"} Jan 30 18:02:26 crc kubenswrapper[4766]: I0130 18:02:26.988606 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="7a7e6dc22b12132566cdb1f3372e28ef36a2890626fed1aeffe7f2d40e465b95" exitCode=0 Jan 30 18:02:26 crc kubenswrapper[4766]: I0130 18:02:26.988676 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"7a7e6dc22b12132566cdb1f3372e28ef36a2890626fed1aeffe7f2d40e465b95"} Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.061432 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.069463 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.078542 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.087504 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 18:02:28 crc kubenswrapper[4766]: I0130 18:02:28.061024 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" path="/var/lib/kubelet/pods/5946960e-4a1d-4360-ae75-7648934eeb0c/volumes" Jan 30 18:02:28 crc kubenswrapper[4766]: I0130 18:02:28.062842 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" path="/var/lib/kubelet/pods/f01e6326-2d83-4889-9b7a-f45b9f6f3063/volumes" Jan 30 18:02:30 crc kubenswrapper[4766]: I0130 18:02:30.020890 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="e1774fc0414c2122617112cd356061ed0dbd63ec7d27ae05b2ae3a89ad7e1ad4" exitCode=0 Jan 30 18:02:30 crc kubenswrapper[4766]: I0130 18:02:30.021488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" 
event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"e1774fc0414c2122617112cd356061ed0dbd63ec7d27ae05b2ae3a89ad7e1ad4"} Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.032366 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="1ef9355ea802924f0712dff7fbacf7e0ea64aef0a6e13c663de8f7b7767d1a2e" exitCode=0 Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.032416 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"1ef9355ea802924f0712dff7fbacf7e0ea64aef0a6e13c663de8f7b7767d1a2e"} Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.895802 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.394231 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.542843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle" (OuterVolumeSpecName: "bundle") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.548372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util" (OuterVolumeSpecName: "util") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.548742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj" (OuterVolumeSpecName: "kube-api-access-jc6xj") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "kube-api-access-jc6xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643421 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643470 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643487 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.055907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9"} Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.056622 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.056063 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.323446 4766 scope.go:117] "RemoveContainer" containerID="e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.346002 4766 scope.go:117] "RemoveContainer" containerID="a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.403498 4766 scope.go:117] "RemoveContainer" containerID="5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.446160 4766 scope.go:117] "RemoveContainer" containerID="b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.468757 4766 scope.go:117] "RemoveContainer" containerID="ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae" Jan 30 18:02:36 crc kubenswrapper[4766]: I0130 18:02:36.054021 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 18:02:36 crc kubenswrapper[4766]: I0130 18:02:36.062289 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 18:02:38 crc kubenswrapper[4766]: I0130 18:02:38.049576 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" path="/var/lib/kubelet/pods/fca69b03-2748-4111-8dd8-0cc28cf328d3/volumes" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.894984 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.895601 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.922527 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923388 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923412 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923437 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="util" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923445 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="util" Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="pull" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="pull" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923666 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.924392 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.926405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-b8nc5" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.926800 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.928460 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.964907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.035561 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.037203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.040666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zttmf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.040906 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.065639 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.066880 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.072447 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcpm\" (UniqueName: \"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.088428 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.118650 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcpm\" (UniqueName: \"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174458 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174494 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.206836 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcpm\" (UniqueName: 
\"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.246912 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.267475 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.269198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.271302 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-x6cmd" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.278504 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280715 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.290256 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.290840 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.295781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.296817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.324045 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.369643 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.382366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.382423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.400847 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.457951 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.465676 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.471070 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n9zjj" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.484387 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.486666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.486741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.517234 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.525095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.589652 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.589846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.682033 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.711543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.711787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.713307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.734402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.823702 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.106515 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.120584 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.149476 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.194975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" event={"ID":"86dd422f-41b2-438f-9a62-e558efc71c90","Type":"ContainerStarted","Data":"387328638ab2dee923c355b91402386ac8a610a08ee2db55cac8a5fc4cf85fb2"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.196668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" event={"ID":"ed5054c0-0009-40bb-8b4c-6e1a4da07b41","Type":"ContainerStarted","Data":"0400ba551869aca326a34e40e075f0e1333962d5a047499cc7cfe746b5606c79"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.198906 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" event={"ID":"4e9a3cc5-7614-4db3-8c5b-590bff436549","Type":"ContainerStarted","Data":"2c82d6e597c8b2c38e64083a01681e90133214195d1a69197b92310389ed04cc"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.292302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:43 crc kubenswrapper[4766]: W0130 18:02:43.293347 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccbd3ff2_7dc6_488c_ae64_d0710464e20d.slice/crio-9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab WatchSource:0}: Error finding container 9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab: Status 404 returned error can't find the container with id 9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.387509 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:43 crc kubenswrapper[4766]: W0130 18:02:43.471218 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9dfe10_4d1d_4081_b3f3_4e7e4be37815.slice/crio-23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053 WatchSource:0}: Error finding container 23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053: Status 404 returned error can't find the container with id 23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053 Jan 30 18:02:44 crc kubenswrapper[4766]: I0130 18:02:44.209165 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" event={"ID":"ccbd3ff2-7dc6-488c-ae64-d0710464e20d","Type":"ContainerStarted","Data":"9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab"} Jan 30 18:02:44 crc kubenswrapper[4766]: I0130 
18:02:44.210944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" event={"ID":"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815","Type":"ContainerStarted","Data":"23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053"} Jan 30 18:02:48 crc kubenswrapper[4766]: I0130 18:02:48.276136 4766 generic.go:334] "Generic (PLEG): container finished" podID="e24a2653-c901-4306-a56b-2e2de8006403" containerID="90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" exitCode=137 Jan 30 18:02:48 crc kubenswrapper[4766]: I0130 18:02:48.276332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217"} Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.206396 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352745 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352823 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352934 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.353968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs" (OuterVolumeSpecName: "logs") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.359046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.362497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl" (OuterVolumeSpecName: "kube-api-access-5hmkl") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "kube-api-access-5hmkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.382077 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data" (OuterVolumeSpecName: "config-data") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.385899 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts" (OuterVolumeSpecName: "scripts") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.438885 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91"} Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.438936 4766 scope.go:117] "RemoveContainer" containerID="1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.439043 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454824 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454864 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454877 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454888 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454896 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.496189 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.514808 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.896208 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:02:58 crc kubenswrapper[4766]: I0130 18:02:58.058728 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24a2653-c901-4306-a56b-2e2de8006403" path="/var/lib/kubelet/pods/e24a2653-c901-4306-a56b-2e2de8006403/volumes" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.668169 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.668742 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) 
--images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d584c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITO
RING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxgcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-zbt8s_openshift-operators(ccbd3ff2-7dc6-488c-ae64-d0710464e20d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.669976 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podUID="ccbd3ff2-7dc6-488c-ae64-d0710464e20d" Jan 30 18:03:00 crc kubenswrapper[4766]: I0130 18:03:00.830746 4766 scope.go:117] "RemoveContainer" containerID="90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.489922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" event={"ID":"4e9a3cc5-7614-4db3-8c5b-590bff436549","Type":"ContainerStarted","Data":"a4b4c9f0f62679eca61ffd8170eae4fb7bc7caf251e227b23da83c9e910015dc"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.493328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" event={"ID":"86dd422f-41b2-438f-9a62-e558efc71c90","Type":"ContainerStarted","Data":"5452329c351a963d4673f268bf0c5fe3355507b6b8abdd7a487e84d95b559d3e"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.495524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" event={"ID":"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815","Type":"ContainerStarted","Data":"6b572102515489d3b29317cf517ffab36ffdaeb05ba662b93e01076576fee807"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.495661 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.497311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" event={"ID":"ed5054c0-0009-40bb-8b4c-6e1a4da07b41","Type":"ContainerStarted","Data":"56179728a168c33618e889a6f300e5a6335a23cda4413e7bc85f27223ddcd3ef"} Jan 30 18:03:01 crc kubenswrapper[4766]: E0130 18:03:01.498665 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podUID="ccbd3ff2-7dc6-488c-ae64-d0710464e20d" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.521963 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" podStartSLOduration=1.834528229 podStartE2EDuration="19.521941771s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.145472862 +0000 UTC m=+6017.783430208" lastFinishedPulling="2026-01-30 18:03:00.832886404 +0000 UTC m=+6035.470843750" observedRunningTime="2026-01-30 18:03:01.517778877 +0000 UTC m=+6036.155736223" watchObservedRunningTime="2026-01-30 18:03:01.521941771 +0000 UTC m=+6036.159899117" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.590560 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" podStartSLOduration=2.249317064 podStartE2EDuration="19.59053873s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.486972969 +0000 UTC m=+6018.124930315" lastFinishedPulling="2026-01-30 18:03:00.828194635 +0000 UTC m=+6035.466151981" observedRunningTime="2026-01-30 18:03:01.584222278 +0000 UTC m=+6036.222179624" watchObservedRunningTime="2026-01-30 18:03:01.59053873 +0000 UTC m=+6036.228496076" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.617543 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" podStartSLOduration=2.919334992 podStartE2EDuration="20.617522026s" podCreationTimestamp="2026-01-30 
18:02:41 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.129976971 +0000 UTC m=+6017.767934317" lastFinishedPulling="2026-01-30 18:03:00.828164005 +0000 UTC m=+6035.466121351" observedRunningTime="2026-01-30 18:03:01.605511479 +0000 UTC m=+6036.243468825" watchObservedRunningTime="2026-01-30 18:03:01.617522026 +0000 UTC m=+6036.255479372" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.651832 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" podStartSLOduration=1.985109754 podStartE2EDuration="19.65180665s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.164029739 +0000 UTC m=+6017.801987085" lastFinishedPulling="2026-01-30 18:03:00.830726635 +0000 UTC m=+6035.468683981" observedRunningTime="2026-01-30 18:03:01.632584906 +0000 UTC m=+6036.270542252" watchObservedRunningTime="2026-01-30 18:03:01.65180665 +0000 UTC m=+6036.289763996" Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.056913 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.069275 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.078707 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.092367 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 18:03:06 crc kubenswrapper[4766]: I0130 18:03:06.055593 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" path="/var/lib/kubelet/pods/03ade9e5-b989-431e-995d-1dec1432ed75/volumes" Jan 30 18:03:06 crc kubenswrapper[4766]: I0130 18:03:06.057022 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" path="/var/lib/kubelet/pods/cb39a90f-2911-4e3f-a034-025eb6f8077d/volumes" Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.057732 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.063133 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.828874 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:03:14 crc kubenswrapper[4766]: I0130 18:03:14.068268 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" path="/var/lib/kubelet/pods/2d89feb8-9495-4c8a-a424-37720df352bb/volumes" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.655526 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" event={"ID":"ccbd3ff2-7dc6-488c-ae64-d0710464e20d","Type":"ContainerStarted","Data":"75bae606af4c056e6d449ad5f7341e03863b098a11a3c404d1fa28d730b4a928"} Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.657591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.682396 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podStartSLOduration=1.993970856 podStartE2EDuration="33.682374917s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.298486683 +0000 UTC m=+6017.936444029" lastFinishedPulling="2026-01-30 18:03:14.986890744 +0000 UTC m=+6049.624848090" observedRunningTime="2026-01-30 18:03:15.675314184 +0000 UTC m=+6050.313271560" watchObservedRunningTime="2026-01-30 18:03:15.682374917 +0000 UTC m=+6050.320332263" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.710655 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.281608 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.282314 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" containerID="cri-o://4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" gracePeriod=2 Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.314924 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.345454 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.345979 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346002 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.346021 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346030 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.346051 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346059 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346332 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346352 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346378 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.347420 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.351212 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" podUID="1f134cd2-6d22-47cd-9ef6-bfdda2701067" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.360895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.487979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.488089 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.488266 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.503877 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.509289 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.512466 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6lq2r" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.522649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.591932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.593204 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.610107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.642429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.680380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.699531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.732009 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.834856 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.729916 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.745971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.753936 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754219 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754334 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754431 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754545 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-4ncrl" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.819756 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="7c586850-0ed6-4949-9087-0e66405455ce" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.868310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.883841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.883993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 
crc kubenswrapper[4766]: I0130 18:03:19.884088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884337 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884424 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985817 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: 
\"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985889 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.993037 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.003930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.017038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.023709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.024103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.025616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.046662 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.056717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.084050 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.099421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.102161 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.121753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132513 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132734 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132944 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133032 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkx8g" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.143800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.229528 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.303993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.304359 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311471 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311495 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415496 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415592 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.429019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.432630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.432737 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.435153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.435682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.444970 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.445014 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/23096af48abc74568aa15792c175d7579a11f0188cc4a814c54861f42a908f6a/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.459224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.801693 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerID="4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" exitCode=137 Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.823237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1f134cd2-6d22-47cd-9ef6-bfdda2701067","Type":"ContainerStarted","Data":"f2a0af1294bb6f2a78b9d34acd4ababe565c7e2427a063600bdad05c7e0d2dbb"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.825058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"899280ca-43e9-46f7-8204-a90e682a0656","Type":"ContainerStarted","Data":"7d59b6622e07dc85bd5da35ec81c6d6cd23bfd09f4a7e92d9da60c9b4860bd55"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.832678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.065688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.113155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.274648 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357993 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.368517 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc" (OuterVolumeSpecName: "kube-api-access-sgzlc") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "kube-api-access-sgzlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.439293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.472995 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.473016 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.477064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.575359 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.867054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"1898ea85e5e9452cca0b95051d4d6b4bc3c0f96cfebc8b613d00e6b77376b379"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.883712 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"899280ca-43e9-46f7-8204-a90e682a0656","Type":"ContainerStarted","Data":"22f2e4e745f0e1c079977c162ac07934d21a9115853257f65d22002b82a4068a"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.885097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.890667 4766 scope.go:117] "RemoveContainer" containerID="4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.890854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.904004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1f134cd2-6d22-47cd-9ef6-bfdda2701067","Type":"ContainerStarted","Data":"16010720370fa2d9c8c37d5f967c4342d33eafa51ad9fb338b254ae7e5a68eca"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.907915 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.360499651 podStartE2EDuration="3.907893968s" podCreationTimestamp="2026-01-30 18:03:18 +0000 UTC" firstStartedPulling="2026-01-30 18:03:19.977354639 +0000 UTC m=+6054.615311985" lastFinishedPulling="2026-01-30 18:03:20.524748966 +0000 UTC m=+6055.162706302" observedRunningTime="2026-01-30 18:03:21.901290469 +0000 UTC m=+6056.539247825" watchObservedRunningTime="2026-01-30 18:03:21.907893968 +0000 UTC m=+6056.545851314" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.944300 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" podUID="1f134cd2-6d22-47cd-9ef6-bfdda2701067" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.947878 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.9478511469999997 podStartE2EDuration="3.947851147s" podCreationTimestamp="2026-01-30 18:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:03:21.935667715 +0000 UTC m=+6056.573625071" watchObservedRunningTime="2026-01-30 18:03:21.947851147 +0000 UTC m=+6056.585808493" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.984473 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:22 crc kubenswrapper[4766]: I0130 18:03:22.052195 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" path="/var/lib/kubelet/pods/c0b97605-5664-4ae7-a15d-26b0ae7b4614/volumes" Jan 30 18:03:22 crc kubenswrapper[4766]: I0130 18:03:22.924864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"acc95510b6dca7728620f801b6294cdc09c765cee3ff5c480b6293df58bcd009"} Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.840448 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.985772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5"} Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.994619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb"} Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.631681 4766 scope.go:117] "RemoveContainer" containerID="cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.677925 4766 scope.go:117] "RemoveContainer" containerID="c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.747240 4766 scope.go:117] "RemoveContainer" containerID="8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.792721 4766 scope.go:117] "RemoveContainer" containerID="5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2" Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.056397 4766 generic.go:334] "Generic (PLEG): container finished" podID="9044d49e-1762-437b-86a3-8697b46a1930" containerID="631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb" exitCode=0 Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.056513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerDied","Data":"631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb"} Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.059831 4766 generic.go:334] "Generic (PLEG): container finished" podID="23ec4e7c-3732-4892-897e-5b2a5e7c2577" containerID="78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5" exitCode=0 Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.059888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerDied","Data":"78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5"} Jan 30 18:03:38 crc kubenswrapper[4766]: I0130 18:03:38.086468 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"8ae39afb14dea25b6a784ad28515ded29ebdd679268c89dea22b469e2544719f"} Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.126465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"c828660430711f040dc96e08b5dc57a0461147cc0fd0dcc324aa8163e2d939db"} Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.127058 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.130521 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.206611 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.740548211 podStartE2EDuration="22.206581058s" podCreationTimestamp="2026-01-30 18:03:19 +0000 UTC" firstStartedPulling="2026-01-30 18:03:21.1663531 +0000 UTC m=+6055.804310446" lastFinishedPulling="2026-01-30 18:03:37.632385947 +0000 UTC m=+6072.270343293" observedRunningTime="2026-01-30 18:03:41.156688518 +0000 UTC m=+6075.794645874" watchObservedRunningTime="2026-01-30 18:03:41.206581058 +0000 UTC m=+6075.844538404" Jan 30 18:03:42 crc kubenswrapper[4766]: I0130 18:03:42.141402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"9e1fd5e99ee5a0c07c68746f082d9c7487e6e422dcf2c09c82acf1464be6c561"} Jan 30 18:03:45 crc kubenswrapper[4766]: I0130 18:03:45.170358 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"2b3bdeb423522d96560be35508f8d5157f86aae69539a8d2200cec35cee94304"} Jan 30 18:03:50 crc kubenswrapper[4766]: I0130 18:03:50.214694 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"a01305a117c7c00b3a3ee7d158ae40f0680fffda85f22ea45e5c306cd84570c2"} Jan 30 18:03:50 crc kubenswrapper[4766]: I0130 18:03:50.259014 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.089224411 podStartE2EDuration="31.258990875s" podCreationTimestamp="2026-01-30 18:03:19 +0000 UTC" firstStartedPulling="2026-01-30 18:03:22.004970994 +0000 UTC m=+6056.642928340" lastFinishedPulling="2026-01-30 18:03:49.174737458 +0000 UTC m=+6083.812694804" observedRunningTime="2026-01-30 18:03:50.242752993 +0000 UTC m=+6084.880710359" watchObservedRunningTime="2026-01-30 18:03:50.258990875 +0000 UTC m=+6084.896948221" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.066489 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.066881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.068994 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.223748 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.910062 4766 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.913199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.920659 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.920782 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.923060 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992986 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.993026 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095476 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095646 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095673 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.096452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.096577 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.102717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.102902 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.107590 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.108182 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.125224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.294931 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.830503 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:53 crc kubenswrapper[4766]: W0130 18:03:53.839509 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464bbfb2_a15f_4b08_85d1_bc0fe536c6d7.slice/crio-8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b WatchSource:0}: Error finding container 8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b: Status 404 returned error can't find the container with id 8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b Jan 30 18:03:54 crc kubenswrapper[4766]: I0130 18:03:54.247757 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b"} Jan 30 18:03:56 crc kubenswrapper[4766]: I0130 18:03:56.266570 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"7235f146f65a0f3fe18a4b3bc30ab7388c6bb2a3e5cc6f5d1bcd61d01098b740"} Jan 30 18:03:57 crc kubenswrapper[4766]: I0130 18:03:57.277684 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"0da1ca76997f8dd0abcc2238713676579311a21c6015f851d7ead0458d1ab65a"} Jan 30 18:03:58 crc kubenswrapper[4766]: I0130 18:03:58.287330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"b364bfa9b39865e084a9fb7492117ebd9d5a37c920d882b4cf48f6dc5b4e57ec"} Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.367421 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"c670836f02038ad7c0a2351f1649a016b515672da0963da6e735e03c6bbe5ef3"} Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.368328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.401989 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.062006504 podStartE2EDuration="15.401961341s" podCreationTimestamp="2026-01-30 18:03:52 +0000 UTC" firstStartedPulling="2026-01-30 18:03:53.84267405 +0000 UTC m=+6088.480631386" lastFinishedPulling="2026-01-30 18:04:06.182628867 +0000 UTC m=+6100.820586223" observedRunningTime="2026-01-30 18:04:07.392692047 +0000 UTC m=+6102.030649393" watchObservedRunningTime="2026-01-30 18:04:07.401961341 +0000 UTC m=+6102.039918717" Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.052253 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.060833 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.072514 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.082899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.091713 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.101225 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.109599 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.118739 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.031316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.043335 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.053041 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.065237 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.081198 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" path="/var/lib/kubelet/pods/03cd48e2-831c-4067-ae82-6aa11c3ed219/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.081812 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" path="/var/lib/kubelet/pods/230985b1-39a5-440c-b67a-97bed8481bd6/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.082724 4766 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" path="/var/lib/kubelet/pods/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.083319 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" path="/var/lib/kubelet/pods/caa501cc-1f23-4a0c-b845-31c9ae218be6/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.084326 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2114339-89f3-4232-94e1-d4323d23978b" path="/var/lib/kubelet/pods/e2114339-89f3-4232-94e1-d4323d23978b/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.084864 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" path="/var/lib/kubelet/pods/eda85bd2-cef5-4dba-b322-a9f16aced872/volumes" Jan 30 18:04:23 crc kubenswrapper[4766]: I0130 18:04:23.311092 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 18:04:24 crc kubenswrapper[4766]: I0130 18:04:24.065246 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 18:04:24 crc kubenswrapper[4766]: I0130 18:04:24.121207 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 18:04:26 crc kubenswrapper[4766]: I0130 18:04:26.051272 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" path="/var/lib/kubelet/pods/b37a2812-82ad-4535-84e6-569f9b3765a6/volumes" Jan 30 18:04:33 crc kubenswrapper[4766]: I0130 18:04:33.999342 4766 scope.go:117] "RemoveContainer" containerID="3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.026256 4766 scope.go:117] "RemoveContainer" containerID="4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.564077 4766 scope.go:117] "RemoveContainer" containerID="b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.635918 4766 scope.go:117] "RemoveContainer" containerID="69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.660825 4766 scope.go:117] "RemoveContainer" containerID="84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.704510 4766 scope.go:117] "RemoveContainer" containerID="afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.759028 4766 scope.go:117] "RemoveContainer" containerID="c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f" Jan 30 18:04:37 crc kubenswrapper[4766]: I0130 18:04:37.072032 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 18:04:37 crc kubenswrapper[4766]: I0130 18:04:37.132736 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.030641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.051511 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" path="/var/lib/kubelet/pods/202a732a-6c9d-427a-9c87-af7c4af5d184/volumes" Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.052574 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 18:04:39 crc kubenswrapper[4766]: I0130 18:04:39.045198 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:04:39 crc kubenswrapper[4766]: I0130 18:04:39.045524 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:04:40 crc kubenswrapper[4766]: I0130 18:04:40.051901 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" path="/var/lib/kubelet/pods/083bdb6d-c3f3-412d-9097-48e66c7f28d0/volumes" Jan 30 18:04:57 crc kubenswrapper[4766]: I0130 18:04:57.036979 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 18:04:57 crc kubenswrapper[4766]: I0130 18:04:57.049935 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 18:04:58 crc kubenswrapper[4766]: I0130 18:04:58.051379 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" path="/var/lib/kubelet/pods/018ff185-8917-437b-9c5a-ec143d1fc84a/volumes" Jan 30 18:05:09 crc kubenswrapper[4766]: I0130 18:05:09.045521 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:05:09 crc kubenswrapper[4766]: I0130 18:05:09.046073 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:05:34 crc kubenswrapper[4766]: I0130 18:05:34.922723 4766 scope.go:117] "RemoveContainer" containerID="1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0" Jan 30 18:05:34 crc kubenswrapper[4766]: I0130 18:05:34.973302 4766 scope.go:117] "RemoveContainer" containerID="a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af" Jan 30 18:05:35 crc kubenswrapper[4766]: I0130 18:05:35.018727 4766 scope.go:117] "RemoveContainer" containerID="622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3" Jan 30 18:05:35 crc kubenswrapper[4766]: I0130 18:05:35.047834 4766 scope.go:117] "RemoveContainer" containerID="5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.045663 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.046148 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.046234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.047446 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.047551 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" gracePeriod=600 Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.065105 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.078380 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.086790 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.094768 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315636 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" exitCode=0 Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315718 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 18:05:40.050488 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" path="/var/lib/kubelet/pods/3f3c8440-d3be-418a-a446-f3f592a864bd/volumes" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 18:05:40.051964 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" path="/var/lib/kubelet/pods/912d4cef-a7f3-40a4-b498-f1da7361a15c/volumes" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 
18:05:40.326565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} Jan 30 18:05:46 crc kubenswrapper[4766]: I0130 18:05:46.038534 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 18:05:46 crc kubenswrapper[4766]: I0130 18:05:46.056652 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 18:05:48 crc kubenswrapper[4766]: I0130 18:05:48.056054 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" path="/var/lib/kubelet/pods/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d/volumes" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.208444 4766 scope.go:117] "RemoveContainer" containerID="d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.252934 4766 scope.go:117] "RemoveContainer" containerID="7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.293252 4766 scope.go:117] "RemoveContainer" containerID="9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d" Jan 30 18:07:39 crc kubenswrapper[4766]: I0130 18:07:39.045169 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:07:39 crc kubenswrapper[4766]: I0130 18:07:39.045740 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.350270 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.355538 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.360447 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425599 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527655 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527774 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527801 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.528592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.528704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.548263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.674435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.188338 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.721119 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2" exitCode=0 Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.721230 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"} Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.722434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"adbc220b2deb8c6b2c23f688ed49bbfcc93a05709363d598129b754f45c43c1c"} Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.724836 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:08:05 crc kubenswrapper[4766]: I0130 18:08:05.750071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} Jan 30 18:08:09 crc kubenswrapper[4766]: I0130 18:08:09.045705 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:08:09 crc kubenswrapper[4766]: I0130 18:08:09.046030 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:13 crc kubenswrapper[4766]: I0130 18:08:13.847746 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f" exitCode=0 Jan 30 18:08:13 crc kubenswrapper[4766]: I0130 18:08:13.847835 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} Jan 30 18:08:14 crc kubenswrapper[4766]: I0130 18:08:14.859951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" 
event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"} Jan 30 18:08:14 crc kubenswrapper[4766]: I0130 18:08:14.886374 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fvqb" podStartSLOduration=2.308503745 podStartE2EDuration="13.886352383s" podCreationTimestamp="2026-01-30 18:08:01 +0000 UTC" firstStartedPulling="2026-01-30 18:08:02.724629037 +0000 UTC m=+6337.362586383" lastFinishedPulling="2026-01-30 18:08:14.302477675 +0000 UTC m=+6348.940435021" observedRunningTime="2026-01-30 18:08:14.881574513 +0000 UTC m=+6349.519531869" watchObservedRunningTime="2026-01-30 18:08:14.886352383 +0000 UTC m=+6349.524309739" Jan 30 18:08:21 crc kubenswrapper[4766]: I0130 18:08:21.675346 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:21 crc kubenswrapper[4766]: I0130 18:08:21.675931 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:22 crc kubenswrapper[4766]: I0130 18:08:22.724710 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:08:22 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 18:08:22 crc kubenswrapper[4766]: > Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.049119 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.051629 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.063119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146206 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.248972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249437 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249726 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.270499 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.380288 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.931823 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.952640 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"9f20e475f5a6055842a49abfae865ce70cc486c8717f74afea1dbb07ab14232e"} Jan 30 18:08:26 crc kubenswrapper[4766]: I0130 18:08:26.963558 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623" exitCode=0 Jan 30 18:08:26 crc kubenswrapper[4766]: I0130 18:08:26.963611 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623"} Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.056188 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.056524 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.989118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c"} Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.000683 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c" exitCode=0 Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.000797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c"} Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.036323 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.051264 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" path="/var/lib/kubelet/pods/944d7612-c3af-4bbd-b193-a2769b8d362d/volumes" Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.052139 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 18:08:31 crc kubenswrapper[4766]: I0130 18:08:31.013934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" 
event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8"} Jan 30 18:08:31 crc kubenswrapper[4766]: I0130 18:08:31.034696 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m77k" podStartSLOduration=2.506141884 podStartE2EDuration="6.034677363s" podCreationTimestamp="2026-01-30 18:08:25 +0000 UTC" firstStartedPulling="2026-01-30 18:08:26.965635125 +0000 UTC m=+6361.603592481" lastFinishedPulling="2026-01-30 18:08:30.494170614 +0000 UTC m=+6365.132127960" observedRunningTime="2026-01-30 18:08:31.030100447 +0000 UTC m=+6365.668057793" watchObservedRunningTime="2026-01-30 18:08:31.034677363 +0000 UTC m=+6365.672634709" Jan 30 18:08:32 crc kubenswrapper[4766]: I0130 18:08:32.050642 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" path="/var/lib/kubelet/pods/6fc91a16-cfbf-425d-bca1-f23f53f60beb/volumes" Jan 30 18:08:32 crc kubenswrapper[4766]: I0130 18:08:32.730542 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:08:32 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 18:08:32 crc kubenswrapper[4766]: > Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.110661 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.120028 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.381379 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.381443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.413564 4766 scope.go:117] "RemoveContainer" containerID="c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.438592 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.452283 4766 scope.go:117] "RemoveContainer" containerID="82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26" Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.031379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.118013 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" path="/var/lib/kubelet/pods/0550f6c1-ed1f-405f-8420-507890f13d75/volumes" Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.120583 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.199780 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:36 crc kubenswrapper[4766]: 
I0130 18:08:36.249527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:38 crc kubenswrapper[4766]: I0130 18:08:38.051834 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c327fe8-260c-4117-b55e-3612be41da79" path="/var/lib/kubelet/pods/0c327fe8-260c-4117-b55e-3612be41da79/volumes" Jan 30 18:08:38 crc kubenswrapper[4766]: I0130 18:08:38.147009 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m77k" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server" containerID="cri-o://4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" gracePeriod=2 Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046242 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046720 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046793 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.047852 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.047965 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" gracePeriod=600 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.166287 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" exitCode=0 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.166430 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8"} Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170169 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" exitCode=0 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170290 4766 scope.go:117] "RemoveContainer" containerID="14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" Jan 30 18:08:40 crc kubenswrapper[4766]: E0130 18:08:40.272751 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.503465 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.615902 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616023 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616989 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities" (OuterVolumeSpecName: "utilities") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.622377 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg" (OuterVolumeSpecName: "kube-api-access-k8hsg") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "kube-api-access-k8hsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.640489 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.718852 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.719159 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.719259 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.181823 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:08:41 crc kubenswrapper[4766]: E0130 18:08:41.182129 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.183959 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"9f20e475f5a6055842a49abfae865ce70cc486c8717f74afea1dbb07ab14232e"} Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.184074 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.184090 4766 scope.go:117] "RemoveContainer" containerID="4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.219882 4766 scope.go:117] "RemoveContainer" containerID="4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.225267 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.234476 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.257314 4766 scope.go:117] "RemoveContainer" containerID="0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.728935 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.811467 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:42 crc kubenswrapper[4766]: I0130 18:08:42.052995 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" path="/var/lib/kubelet/pods/452703b6-c53d-4432-8d58-cbdf354b0887/volumes" Jan 30 18:08:42 crc kubenswrapper[4766]: I0130 18:08:42.742647 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.214724 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" containerID="cri-o://7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" gracePeriod=2 Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.695285 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782811 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.783925 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities" (OuterVolumeSpecName: "utilities") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.788076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2" (OuterVolumeSpecName: "kube-api-access-dclp2") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "kube-api-access-dclp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.885555 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.885762 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.893837 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.987330 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234675 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" exitCode=0 Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234785 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"} Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"adbc220b2deb8c6b2c23f688ed49bbfcc93a05709363d598129b754f45c43c1c"} Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234871 4766 scope.go:117] "RemoveContainer" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.235215 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.266305 4766 scope.go:117] "RemoveContainer" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.272994 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.282940 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.299357 4766 scope.go:117] "RemoveContainer" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.338432 4766 scope.go:117] "RemoveContainer" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.338886 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": container with ID starting with 7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218 not found: ID does not exist" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.338933 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"} err="failed to get container status \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": rpc error: code = NotFound desc = could not find container \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": container with ID starting with 7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218 not found: ID does not exist" Jan 30 18:08:44 crc 
kubenswrapper[4766]: I0130 18:08:44.338956 4766 scope.go:117] "RemoveContainer" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f" Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.339359 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": container with ID starting with a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f not found: ID does not exist" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339386 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} err="failed to get container status \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": rpc error: code = NotFound desc = could not find container \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": container with ID starting with a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f not found: ID does not exist" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339403 4766 scope.go:117] "RemoveContainer" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2" Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.339670 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": container with ID starting with 6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2 not found: ID does not exist" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2" Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339698 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"} err="failed to get container status \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": rpc error: code = NotFound desc = could not find container \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": container with ID starting with 6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2 not found: ID does not exist" Jan 30 18:08:46 crc kubenswrapper[4766]: I0130 18:08:46.058955 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" path="/var/lib/kubelet/pods/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a/volumes" Jan 30 18:08:55 crc kubenswrapper[4766]: I0130 18:08:55.039497 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:08:55 crc kubenswrapper[4766]: E0130 18:08:55.040269 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:09:10 crc kubenswrapper[4766]: I0130 18:09:10.041429 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" 
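The RemoveContainer / ContainerStatus exchange above is a benign race, not data loss: the container had already been removed, so the follow-up status lookup returns gRPC NotFound and the kubelet merely logs the error and moves on. Cleanup code talking to a gRPC-backed runtime can treat that code as "already done"; a sketch using the standard grpc-go status helpers (alreadyGone is an illustrative name):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a runtime error only means the container
    // no longer exists, the case behind the repeated
    // "NotFound ... ID does not exist" messages above.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, `could not find container "7bc768..."`)
        if alreadyGone(err) {
            fmt.Println("container already removed; treat the delete as done")
        }
    }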
Jan 30 18:09:10 crc kubenswrapper[4766]: E0130 18:09:10.042199 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:09:12 crc kubenswrapper[4766]: I0130 18:09:12.075565 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 18:09:12 crc kubenswrapper[4766]: I0130 18:09:12.075870 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 18:09:14 crc kubenswrapper[4766]: I0130 18:09:14.051391 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" path="/var/lib/kubelet/pods/fd5031f6-51af-4f63-8bc4-4a518f58ddd4/volumes" Jan 30 18:09:21 crc kubenswrapper[4766]: I0130 18:09:21.039447 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:09:21 crc kubenswrapper[4766]: E0130 18:09:21.040255 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.302557 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303497 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-content" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303519 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-content" Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303547 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303555 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303578 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303588 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303613 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-utilities" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303621 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-utilities" Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303640 
4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-utilities" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303648 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-utilities" Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303670 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-content" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303678 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-content" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303939 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303967 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.305910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.328339 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.533996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " 
pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534131 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.535007 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.558664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.666476 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.175219 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595206 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16" exitCode=0 Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"} Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"aa2967831de8a967e1d47e44ca024dc8293456b0c2c5eff8b5ff4b43f600fab6"} Jan 30 18:09:24 crc kubenswrapper[4766]: I0130 18:09:24.607912 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"} Jan 30 18:09:26 crc kubenswrapper[4766]: I0130 18:09:26.626082 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5" exitCode=0 Jan 30 18:09:26 crc kubenswrapper[4766]: I0130 18:09:26.626147 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"} Jan 30 18:09:27 crc kubenswrapper[4766]: I0130 18:09:27.637213 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"} Jan 30 18:09:27 crc kubenswrapper[4766]: I0130 18:09:27.664068 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfxhv" podStartSLOduration=2.198170092 podStartE2EDuration="5.664047841s" podCreationTimestamp="2026-01-30 18:09:22 +0000 UTC" firstStartedPulling="2026-01-30 18:09:23.596999316 +0000 UTC m=+6418.234956662" lastFinishedPulling="2026-01-30 18:09:27.062877075 +0000 UTC m=+6421.700834411" observedRunningTime="2026-01-30 18:09:27.656152756 +0000 UTC m=+6422.294110102" watchObservedRunningTime="2026-01-30 18:09:27.664047841 +0000 UTC m=+6422.302005187" Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.666912 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.667532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.712548 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.772864 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.958330 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:34 crc kubenswrapper[4766]: I0130 18:09:34.723796 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfxhv" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server" containerID="cri-o://7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" gracePeriod=2 Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.433762 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.529990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.530490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.530552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.531265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities" (OuterVolumeSpecName: "utilities") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.539027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs" (OuterVolumeSpecName: "kube-api-access-tqlbs") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "kube-api-access-tqlbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.551243 4766 scope.go:117] "RemoveContainer" containerID="ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.634330 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.634392 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") on node \"crc\" DevicePath \"\"" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.664792 4766 scope.go:117] "RemoveContainer" containerID="31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737243 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" exitCode=0 Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737316 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"} Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737353 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"aa2967831de8a967e1d47e44ca024dc8293456b0c2c5eff8b5ff4b43f600fab6"} Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737377 4766 scope.go:117] "RemoveContainer" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737515 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.750834 4766 scope.go:117] "RemoveContainer" containerID="1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.788626 4766 scope.go:117] "RemoveContainer" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.810585 4766 scope.go:117] "RemoveContainer" containerID="b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.832300 4766 scope.go:117] "RemoveContainer" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.863783 4766 scope.go:117] "RemoveContainer" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.864253 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": container with ID starting with 7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de not found: ID does not exist" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864349 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"} err="failed to get container status \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": rpc error: code = NotFound desc = could not find container \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": container with ID starting with 7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de not found: ID does not exist" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864429 4766 scope.go:117] "RemoveContainer" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5" Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.864725 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": container with ID starting with 95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5 not found: ID does not exist" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864774 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"} err="failed to get container status \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": rpc error: code = NotFound desc = could not find container \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": container with ID starting with 95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5 not found: ID does not exist" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864807 4766 scope.go:117] "RemoveContainer" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16" Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.865072 4766 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": container with ID starting with 67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16 not found: ID does not exist" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16" Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.865161 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"} err="failed to get container status \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": rpc error: code = NotFound desc = could not find container \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": container with ID starting with 67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16 not found: ID does not exist" Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.046293 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:09:36 crc kubenswrapper[4766]: E0130 18:09:36.046621 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.383349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.453430 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.671555 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.684050 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"] Jan 30 18:09:38 crc kubenswrapper[4766]: I0130 18:09:38.053655 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" path="/var/lib/kubelet/pods/ef7abb63-975d-41fe-9e07-406bd855526f/volumes" Jan 30 18:09:51 crc kubenswrapper[4766]: I0130 18:09:51.039844 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:09:51 crc kubenswrapper[4766]: E0130 18:09:51.040555 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:03 crc kubenswrapper[4766]: I0130 18:10:03.040432 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:10:03 crc kubenswrapper[4766]: E0130 18:10:03.041530 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:14 crc kubenswrapper[4766]: I0130 18:10:14.039003 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:10:14 crc kubenswrapper[4766]: E0130 18:10:14.039684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:25 crc kubenswrapper[4766]: I0130 18:10:25.039412 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:10:25 crc kubenswrapper[4766]: E0130 18:10:25.040237 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.815985 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817014 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817028 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server" Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817043 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-utilities" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817049 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-utilities" Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817064 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-content" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817069 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-content" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817266 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.818354 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.820950 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzpss"/"kube-root-ca.crt" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.821207 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzpss"/"openshift-service-ca.crt" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.821425 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vzpss"/"default-dockercfg-rd7z6" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.827517 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.909664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.909752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.011947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.012740 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.012895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.031371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.147286 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.687622 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:10:31 crc kubenswrapper[4766]: I0130 18:10:31.249206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"52a83fa3d0421a4c02b1382ecdde5f2c954b6d5a37559a41d3ebe5dfe743483d"} Jan 30 18:10:35 crc kubenswrapper[4766]: I0130 18:10:35.308148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8"} Jan 30 18:10:36 crc kubenswrapper[4766]: I0130 18:10:36.317798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401"} Jan 30 18:10:36 crc kubenswrapper[4766]: I0130 18:10:36.340770 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzpss/must-gather-w799p" podStartSLOduration=3.183107659 podStartE2EDuration="7.340750177s" podCreationTimestamp="2026-01-30 18:10:29 +0000 UTC" firstStartedPulling="2026-01-30 18:10:30.690101571 +0000 UTC m=+6485.328058917" lastFinishedPulling="2026-01-30 18:10:34.847744089 +0000 UTC m=+6489.485701435" observedRunningTime="2026-01-30 18:10:36.340289995 +0000 UTC m=+6490.978247351" watchObservedRunningTime="2026-01-30 18:10:36.340750177 +0000 UTC m=+6490.978707513" Jan 30 18:10:38 crc kubenswrapper[4766]: I0130 18:10:38.039969 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:10:38 crc kubenswrapper[4766]: E0130 18:10:38.040934 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.288883 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"] Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.291517 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.442982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.443654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.566364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.615163 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:10:40 crc kubenswrapper[4766]: W0130 18:10:40.658523 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb173c41_5a03_41e9_9607_2af3fadd2bb0.slice/crio-573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb WatchSource:0}: Error finding container 573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb: Status 404 returned error can't find the container with id 573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb Jan 30 18:10:41 crc kubenswrapper[4766]: I0130 18:10:41.368625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerStarted","Data":"573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb"} Jan 30 18:10:52 crc kubenswrapper[4766]: I0130 18:10:52.039823 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:10:52 crc kubenswrapper[4766]: E0130 18:10:52.067071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:10:53 crc kubenswrapper[4766]: I0130 18:10:53.497272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerStarted","Data":"549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543"} Jan 30 18:10:53 crc kubenswrapper[4766]: I0130 18:10:53.524080 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" podStartSLOduration=0.959054018 podStartE2EDuration="13.524054538s" podCreationTimestamp="2026-01-30 18:10:40 +0000 UTC" firstStartedPulling="2026-01-30 18:10:40.66211199 +0000 UTC m=+6495.300069346" lastFinishedPulling="2026-01-30 18:10:53.22711252 +0000 UTC m=+6507.865069866" observedRunningTime="2026-01-30 18:10:53.514292372 +0000 UTC m=+6508.152249718" watchObservedRunningTime="2026-01-30 18:10:53.524054538 +0000 UTC m=+6508.162011884" Jan 30 18:11:07 crc kubenswrapper[4766]: I0130 18:11:07.040390 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:11:07 crc kubenswrapper[4766]: E0130 18:11:07.041309 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:11:16 crc kubenswrapper[4766]: I0130 18:11:16.701346 4766 generic.go:334] "Generic (PLEG): container finished" podID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerID="549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543" exitCode=0 Jan 30 18:11:16 crc kubenswrapper[4766]: I0130 18:11:16.701415 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerDied","Data":"549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543"} Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.841113 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.874732 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"] Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.883263 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"] Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.931750 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.931902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host" (OuterVolumeSpecName: "host") pod "bb173c41-5a03-41e9-9607-2af3fadd2bb0" (UID: "bb173c41-5a03-41e9-9607-2af3fadd2bb0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.932253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.932934 4766 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") on node \"crc\" DevicePath \"\"" Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.939796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx" (OuterVolumeSpecName: "kube-api-access-swlmx") pod "bb173c41-5a03-41e9-9607-2af3fadd2bb0" (UID: "bb173c41-5a03-41e9-9607-2af3fadd2bb0"). InnerVolumeSpecName "kube-api-access-swlmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.035072 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") on node \"crc\" DevicePath \"\"" Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.051116 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" path="/var/lib/kubelet/pods/bb173c41-5a03-41e9-9607-2af3fadd2bb0/volumes" Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.720589 4766 scope.go:117] "RemoveContainer" containerID="549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543" Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.720643 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.039385 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:11:19 crc kubenswrapper[4766]: E0130 18:11:19.040019 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.067974 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"] Jan 30 18:11:19 crc kubenswrapper[4766]: E0130 18:11:19.068539 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.068566 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.068803 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.069651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.153310 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.153871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " 
pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.278960 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.386750 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.736800 4766 generic.go:334] "Generic (PLEG): container finished" podID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerID="4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d" exitCode=1 Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.736896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" event={"ID":"e793baf0-20e5-4275-b2ca-28cc4203be80","Type":"ContainerDied","Data":"4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d"} Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.737453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" event={"ID":"e793baf0-20e5-4275-b2ca-28cc4203be80","Type":"ContainerStarted","Data":"89dbc8d740b089d80c12d950faa56c545d4bde43689cb20a7ab9dbb853db3b1d"} Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.771997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"] Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.780701 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"] Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.870279 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.992965 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"e793baf0-20e5-4275-b2ca-28cc4203be80\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.993057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"e793baf0-20e5-4275-b2ca-28cc4203be80\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.993437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host" (OuterVolumeSpecName: "host") pod "e793baf0-20e5-4275-b2ca-28cc4203be80" (UID: "e793baf0-20e5-4275-b2ca-28cc4203be80"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.994164 4766 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") on node \"crc\" DevicePath \"\"" Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.998444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf" (OuterVolumeSpecName: "kube-api-access-nckkf") pod "e793baf0-20e5-4275-b2ca-28cc4203be80" (UID: "e793baf0-20e5-4275-b2ca-28cc4203be80"). InnerVolumeSpecName "kube-api-access-nckkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.096269 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") on node \"crc\" DevicePath \"\"" Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.770340 4766 scope.go:117] "RemoveContainer" containerID="4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d" Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.770554 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" Jan 30 18:11:22 crc kubenswrapper[4766]: I0130 18:11:22.052007 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" path="/var/lib/kubelet/pods/e793baf0-20e5-4275-b2ca-28cc4203be80/volumes" Jan 30 18:11:30 crc kubenswrapper[4766]: I0130 18:11:30.039525 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:11:30 crc kubenswrapper[4766]: E0130 18:11:30.040300 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:11:41 crc kubenswrapper[4766]: I0130 18:11:41.039655 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:11:41 crc kubenswrapper[4766]: E0130 18:11:41.041515 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.047765 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"] Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.060701 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"] Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.069734 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-qr4v8"] Jan 30 18:11:49 crc 
kubenswrapper[4766]: I0130 18:11:49.076993 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:11:50 crc kubenswrapper[4766]: I0130 18:11:50.052180 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" path="/var/lib/kubelet/pods/8ac9189d-ff73-4cd5-8299-276858527c74/volumes"
Jan 30 18:11:50 crc kubenswrapper[4766]: I0130 18:11:50.053650 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" path="/var/lib/kubelet/pods/f43513bc-2d21-47b3-8acb-b331c5f5f46f/volumes"
Jan 30 18:11:56 crc kubenswrapper[4766]: I0130 18:11:56.045515 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:56 crc kubenswrapper[4766]: E0130 18:11:56.046417 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.274328 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/init-config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.522693 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/init-config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.524311 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/alertmanager/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.602963 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.719052 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b7b4f6b66-crqxp_c0607eb3-be12-4282-ac48-55b5220b4888/barbican-api/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.751643 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b7b4f6b66-crqxp_c0607eb3-be12-4282-ac48-55b5220b4888/barbican-api-log/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.897314 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78445c974-66754_a6132938-2052-4889-b1d7-2e43deb664e1/barbican-keystone-listener/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.919274 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78445c974-66754_a6132938-2052-4889-b1d7-2e43deb664e1/barbican-keystone-listener-log/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.023892 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84dcf975b7-fj984_eb8f2fee-863e-4c1e-90af-6ed7a631a4ac/barbican-worker/0.log"
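
The burst of log.go:25 "Finished parsing log file" entries that starts here is consistent with the must-gather pod above walking every container on the node and streaming its logs through the kubelet. The files being read under /var/log/pods are in the CRI container-log format: one record per line of the form "timestamp stream tag message", where a tag of F marks a complete line and P a partial line continued by the next record. A minimal illustrative parser, not the kubelet's implementation:

package main

import (
	"fmt"
	"strings"
	"time"
)

type criLogLine struct {
	When    time.Time
	Stream  string // "stdout" or "stderr"
	Partial bool   // tag starting with "P" means the message continues on the next record
	Message string
}

// parseCRILogLine splits one line of a /var/log/pods/.../0.log file into its
// four space-delimited fields and parses the RFC3339Nano timestamp.
func parseCRILogLine(line string) (criLogLine, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criLogLine{}, fmt.Errorf("malformed CRI log line: %q", line)
	}
	when, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return criLogLine{}, err
	}
	return criLogLine{
		When:    when,
		Stream:  parts[1],
		Partial: strings.HasPrefix(parts[2], "P"),
		Message: parts[3],
	}, nil
}

func main() {
	// Hypothetical record of the kind these files contain.
	l, err := parseCRILogLine("2026-01-30T18:12:00.274328000Z stderr F starting config reloader")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] %s\n", l.When.Format(time.RFC3339), l.Stream, l.Message)
}
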
path="/var/log/pods/openstack_barbican-worker-84dcf975b7-fj984_eb8f2fee-863e-4c1e-90af-6ed7a631a4ac/barbican-worker-log/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.215399 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/ceilometer-central-agent/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.279513 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/ceilometer-notification-agent/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.313975 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/proxy-httpd/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.366926 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/sg-core/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.504652 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9a81891-2796-4952-bf9e-9a9f83668e34/cinder-api-log/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.528031 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9a81891-2796-4952-bf9e-9a9f83668e34/cinder-api/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.726450 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1a4ab9dd-be94-4701-a0ba-55dde27e9543/probe/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.804706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1a4ab9dd-be94-4701-a0ba-55dde27e9543/cinder-backup/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.865986 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_598edf34-3970-416e-b9fb-4de69de61ca1/cinder-scheduler/0.log" Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.945535 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_598edf34-3970-416e-b9fb-4de69de61ca1/probe/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.027502 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf1121a2-7545-40c9-9280-9337e94554d9/cinder-volume/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.037278 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-276pq"] Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.064822 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-276pq"] Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.093835 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf1121a2-7545-40c9-9280-9337e94554d9/probe/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.245893 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/init/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.390894 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/init/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.422065 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/dnsmasq-dns/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.425615 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ddc1af26-668d-4715-b17a-e94ee4f5b571/glance-httpd/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.603963 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ddc1af26-668d-4715-b17a-e94ee4f5b571/glance-log/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.629740 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c25f82b3-9296-4814-92b1-59ca5c2bf2a0/glance-httpd/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.699861 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c25f82b3-9296-4814-92b1-59ca5c2bf2a0/glance-log/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.865337 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-675bcfc5ff-kvdtq_e11fd011-1725-4cdd-979f-75eecd0329b2/heat-api/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.930335 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-675bf5dcf-ltj5r_65f44ca0-52f4-4d4a-aeb8-18275fff50eb/heat-cfnapi/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.122816 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-54c46d7b9c-z94n2_364a6690-a249-4765-b86e-b72ca919edb8/heat-engine/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.403031 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-757f4f657-jzgr8_f7b06d45-03c9-406f-8fc0-79428ec9de8f/horizon/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.500067 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-757f4f657-jzgr8_f7b06d45-03c9-406f-8fc0-79428ec9de8f/horizon-log/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.547662 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496601-pl6qc_5d20810a-2efe-43c6-a8e6-92a14834a048/keystone-cron/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.776686 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_899280ca-43e9-46f7-8204-a90e682a0656/kube-state-metrics/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.803847 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d9bc78c74-tqx5h_d2175d86-a673-4c75-9344-d410bff4770a/keystone-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.001380 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_d76c2935-d3e2-401f-bdd0-878e885a5add/adoption/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.051619 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" path="/var/lib/kubelet/pods/05bc6794-04be-40f4-8fa7-552f45a104c0/volumes" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.324480 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-577cfcb8f7-k7t7l_f8fd7445-369a-43d1-8b68-6a3d7b2abbe3/neutron-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.375136 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-577cfcb8f7-k7t7l_f8fd7445-369a-43d1-8b68-6a3d7b2abbe3/neutron-httpd/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.576208 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af618003-f485-4daa-bedb-d1408b4547bb/nova-api-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.734393 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af618003-f485-4daa-bedb-d1408b4547bb/nova-api-log/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.860213 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_463fa20b-ef02-4b0a-ae8e-3fed6dc02c37/nova-cell0-conductor-conductor/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.022292 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e/nova-cell1-conductor-conductor/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.236499 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5d4aa9c5-4f42-495a-921f-986b170dafe4/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.291336 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_374fa21e-428d-4383-9124-5272df0552d4/nova-metadata-metadata/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.311119 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_374fa21e-428d-4383-9124-5272df0552d4/nova-metadata-log/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.526799 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_782b2122-c6f0-424d-85b1-efb911f37e20/nova-scheduler-scheduler/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.582372 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/init/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.825440 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/octavia-api-provider-agent/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.853086 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/init/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.992750 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/octavia-api/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.047706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.234368 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/octavia-healthmanager/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.293538 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.400658 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.621907 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.663577 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.721691 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/octavia-housekeeping/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.854310 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/octavia-amphora-httpd/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.945004 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.992702 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.355412 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.415953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/octavia-rsyslog/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.483146 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.699335 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.801057 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/mysql-bootstrap/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.842780 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/octavia-worker/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.015548 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.025520 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/galera/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.144556 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.321676 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.429997 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/galera/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.441452 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1f134cd2-6d22-47cd-9ef6-bfdda2701067/openstackclient/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.666710 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-k9frg_8d8369af-eac5-4d31-b183-1a542da452c5/ovn-controller/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.770954 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8hgh6_1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5/openstack-network-exporter/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.989334 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server-init/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.217907 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server-init/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.226936 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.228663 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovs-vswitchd/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.463125 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_7fb6354d-977f-494f-9a51-0a1b8f48c686/adoption/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.514161 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9743ed16-7558-435e-9f72-3688bd1102d7/ovn-northd/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.539534 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9743ed16-7558-435e-9f72-3688bd1102d7/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.766553 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b16a682c-8a11-4113-82e8-b361a1d8881e/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.781601 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b16a682c-8a11-4113-82e8-b361a1d8881e/ovsdbserver-nb/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.955844 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_b29c551b-31dd-4264-b3f0-04fde1a2529f/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.994924 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_b29c551b-31dd-4264-b3f0-04fde1a2529f/ovsdbserver-nb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.057764 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_2591e329-01bd-4573-8590-6e3f62bfb187/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.170695 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_2591e329-01bd-4573-8590-6e3f62bfb187/ovsdbserver-nb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.245477 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1053f18b-60a9-44c8-84f5-77bc506a83c1/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.340369 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1053f18b-60a9-44c8-84f5-77bc506a83c1/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.413750 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_08baa9d0-2942-4a73-a75a-d13dc2148bb0/memcached/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.496743 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_95b4e121-951b-4c45-a227-1ec8638a2320/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.550961 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_95b4e121-951b-4c45-a227-1ec8638a2320/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.579230 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_76df5ae8-0eeb-4bb5-86ee-1c416397a186/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.687410 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_76df5ae8-0eeb-4bb5-86ee-1c416397a186/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.760683 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cf79c7456-bp9jt_234231ef-1ed0-40ff-a4a8-0d9f533d39de/placement-api/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.807223 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cf79c7456-bp9jt_234231ef-1ed0-40ff-a4a8-0d9f533d39de/placement-log/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.903509 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/init-config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.040492 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:11 crc kubenswrapper[4766]: E0130 18:12:11.041228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.060490 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/prometheus/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.068665 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/init-config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.092570 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.106241 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/thanos-sidecar/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.229587 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.423384 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.467008 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/rabbitmq/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.508985 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.659694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.669614 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/rabbitmq/0.log" Jan 30 18:12:22 crc kubenswrapper[4766]: I0130 18:12:22.039250 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:22 crc kubenswrapper[4766]: E0130 18:12:22.040015 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.531110 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-ssl7s_46a7c725-b480-4f85-91d0-24831e713b26/manager/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.609853 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.757120 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.840461 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.857547 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.987861 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.006484 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/extract/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.016694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.234410 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-rjgtk_c610cc53-6813-4c5b-86e9-b421aaa21666/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.273922 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-mlkcx_72b84e1c-8ed8-4fae-8dff-ca2576579904/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.503142 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-8hrwp_2a5fe995-2904-4751-ae74-958efaa8596a/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.555516 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bfc9d4d48-7287m_d34f90ce-9c03-441f-85cb-67b1666672fc/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.675684 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-lhxhc_be908bdc-d0b5-4409-b088-b9b51de3cfb0/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.886827 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-jhbv7_16fd0d31-da4c-4c6b-bbc4-8302daee3ee5/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.119706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-xkfn6_b0db2f42-5872-4cac-9ee0-5990c49e0a26/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.222110 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-l4pbc_0974b654-1fc0-4d97-9be3-eca153de4c57/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.281541 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ddthn_09fcb126-016c-4b79-91d5-90e98e3da7f3/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.447140 4766 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-jzztd_1ea9d2ea-ca11-428c-ab61-28bf391bcd4f/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.558882 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-kkvlj_d4c39f8d-f83d-4311-bb99-24dfa7eaeafd/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.839635 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-6jc7f_0582a100-4b50-452f-baca-e67b4d6f2891/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.844559 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-swq4p_a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.924924 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r_90a2893c-9d38-4d53-93d9-a50421172933/manager/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.183231 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c7c85d9bc-85t58_e1df6663-4a1f-4900-8eba-215a6f08beb0/operator/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.394787 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dpb9n_502b8426-9711-4e00-b59f-743352003f2b/registry-server/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.720221 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-2jmqd_8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90/manager/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.760527 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-bm24k_04cf0394-fb7b-41a9-a9bb-6fec8537d393/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.002601 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-49xwp_dc1c52ba-db5b-40ac-87da-de36346e8491/operator/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.039160 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:35 crc kubenswrapper[4766]: E0130 18:12:35.039435 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.472560 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-566d8d7445-l44w4_5eacef6b-7362-4c43-912a-eb3e6ccce6e9/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.670773 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-69484b8d9d-tqxks_0c603c94-f0b0-4820-a5a1-0ab9a76ceb49/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.671953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-d7xxm_c03d46f4-f454-4b31-b4c7-5c324390d8ec/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.834783 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86bf68df65-m95g8_b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.858399 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-dklb4_55fb4fd9-f80b-474b-b9c9-758720536349/manager/0.log" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.021685 4766 scope.go:117] "RemoveContainer" containerID="2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.048398 4766 scope.go:117] "RemoveContainer" containerID="fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.098578 4766 scope.go:117] "RemoveContainer" containerID="2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169" Jan 30 18:12:46 crc kubenswrapper[4766]: I0130 18:12:46.052260 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:46 crc kubenswrapper[4766]: E0130 18:12:46.053869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.043506 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-28vp9_bb325f25-00bb-4519-99d5-94ea7bbcd9d5/control-plane-machine-set-operator/0.log" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.271676 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jn8dp_8acca84e-2800-4a20-b3e8-84e021d1c001/kube-rbac-proxy/0.log" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.326130 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jn8dp_8acca84e-2800-4a20-b3e8-84e021d1c001/machine-api-operator/0.log" Jan 30 18:12:57 crc kubenswrapper[4766]: I0130 18:12:57.040371 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:57 crc kubenswrapper[4766]: E0130 18:12:57.040973 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.664755 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-9lmrd_d635eb48-c2c9-404e-9ffb-c8385134670b/cert-manager-controller/0.log" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.835524 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-ltbxj_92fa5747-17c3-4b1c-a66a-e8b0a1d6f622/cert-manager-webhook/0.log" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.905352 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qr6lx_b1682925-c14f-425a-b072-535a37cdca48/cert-manager-cainjector/0.log" Jan 30 18:13:08 crc kubenswrapper[4766]: I0130 18:13:08.040353 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:08 crc kubenswrapper[4766]: E0130 18:13:08.041032 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.289251 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:16 crc kubenswrapper[4766]: E0130 18:13:16.295487 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.295528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.295754 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.297470 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.307099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515072 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515545 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617623 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.618064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.643017 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.741680 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.345220 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.744458 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-d2p2z_d30ca6b4-bd87-4d25-92dd-f3d94410f2a3/nmstate-console-plugin/0.log" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.963211 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-82wxr_121c0166-75c7-4f39-a07b-c89cb81d2fd8/nmstate-handler/0.log" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.989283 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wv52c_46ac0f62-2413-4258-a957-35039942d0f7/kube-rbac-proxy/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004557 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" exitCode=0 Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34"} Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"d556a6d4ba3b27fc2742e8c095741dd6d4af9660f74c660ee0fa4ba9a2509a03"} Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.007124 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.045072 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wv52c_46ac0f62-2413-4258-a957-35039942d0f7/nmstate-metrics/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.176489 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v6mpm_463d1450-7318-4003-b30d-82dc9e1bec53/nmstate-operator/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.226509 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-zj7fb_ed7e34e5-c04e-4852-b4a3-9e28fd5f960d/nmstate-webhook/0.log" Jan 30 18:13:19 crc kubenswrapper[4766]: I0130 18:13:19.040092 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:19 crc kubenswrapper[4766]: E0130 18:13:19.040473 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:20 crc kubenswrapper[4766]: I0130 18:13:20.025329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} Jan 30 18:13:22 crc kubenswrapper[4766]: I0130 18:13:22.048656 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" exitCode=0 Jan 30 18:13:22 crc kubenswrapper[4766]: I0130 18:13:22.050942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} Jan 30 18:13:23 crc kubenswrapper[4766]: I0130 18:13:23.061221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} Jan 30 18:13:23 crc kubenswrapper[4766]: I0130 18:13:23.083539 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbhzj" podStartSLOduration=2.537404093 podStartE2EDuration="7.083516378s" podCreationTimestamp="2026-01-30 18:13:16 +0000 UTC" firstStartedPulling="2026-01-30 18:13:18.006814839 +0000 UTC m=+6652.644772185" lastFinishedPulling="2026-01-30 18:13:22.552927124 +0000 UTC m=+6657.190884470" observedRunningTime="2026-01-30 18:13:23.07810329 +0000 UTC m=+6657.716060646" watchObservedRunningTime="2026-01-30 18:13:23.083516378 +0000 UTC m=+6657.721473744" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.742210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.743950 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.807919 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:27 crc kubenswrapper[4766]: I0130 18:13:27.149079 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.252107 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.253264 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbhzj" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" containerID="cri-o://52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" gracePeriod=2 Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.708269 4766 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.823918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.823999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.824216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.825034 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities" (OuterVolumeSpecName: "utilities") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.831992 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz" (OuterVolumeSpecName: "kube-api-access-z97sz") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "kube-api-access-z97sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.888351 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927153 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927218 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927232 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.025711 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-npbz4_ed5054c0-0009-40bb-8b4c-6e1a4da07b41/prometheus-operator/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128248 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" exitCode=0 Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"d556a6d4ba3b27fc2742e8c095741dd6d4af9660f74c660ee0fa4ba9a2509a03"} Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128333 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128352 4766 scope.go:117] "RemoveContainer" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.148597 4766 scope.go:117] "RemoveContainer" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.162652 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.172000 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.189494 4766 scope.go:117] "RemoveContainer" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.224207 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-qm4dx_86dd422f-41b2-438f-9a62-e558efc71c90/prometheus-operator-admission-webhook/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.232590 4766 scope.go:117] "RemoveContainer" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233021 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": container with ID starting with 52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205 not found: ID does not exist" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233070 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} err="failed to get container status \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": rpc error: code = NotFound desc = could not find container \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": container with ID starting with 52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205 not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233098 4766 scope.go:117] "RemoveContainer" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233463 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": container with ID starting with 89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f not found: ID does not exist" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233512 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} err="failed to get container status \"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": rpc error: code = NotFound desc = could not find container 
\"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": container with ID starting with 89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233539 4766 scope.go:117] "RemoveContainer" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233969 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": container with ID starting with 47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34 not found: ID does not exist" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233993 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34"} err="failed to get container status \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": rpc error: code = NotFound desc = could not find container \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": container with ID starting with 47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34 not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.280122 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-v5dzf_4e9a3cc5-7614-4db3-8c5b-590bff436549/prometheus-operator-admission-webhook/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.417789 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbt8s_ccbd3ff2-7dc6-488c-ae64-d0710464e20d/operator/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.460157 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-bgqzt_9f9dfe10-4d1d-4081-b3f3-4e7e4be37815/perses-operator/0.log" Jan 30 18:13:32 crc kubenswrapper[4766]: I0130 18:13:32.039202 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:32 crc kubenswrapper[4766]: E0130 18:13:32.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:32 crc kubenswrapper[4766]: I0130 18:13:32.051189 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" path="/var/lib/kubelet/pods/898542cd-ea0d-42c2-9988-ea4a384d8851/volumes" Jan 30 18:13:43 crc kubenswrapper[4766]: I0130 18:13:43.040032 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.237760 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.298346 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7v5hl_f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873/kube-rbac-proxy/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.580719 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.695945 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7v5hl_f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873/controller/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.847359 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.856493 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.867083 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.930541 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.107616 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.107709 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.142387 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.163265 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.326590 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.327975 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.329968 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.333645 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/controller/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.490654 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/frr-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.542440 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/kube-rbac-proxy/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.584865 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/kube-rbac-proxy-frr/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.708228 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.845066 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-z9cbg_85bd5ff3-9577-4598-92a9-f24f00c56187/frr-k8s-webhook-server/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.117732 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d87dd9885-cpjtx_8f4ddea0-a380-401d-849f-6968d6d80e8b/manager/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.200396 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-698996dc4d-5ps7v_5aa43b8e-3f06-441e-ade0-264da132ec73/webhook-server/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.315903 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfspk_4ad0227f-0410-4f5e-bfc5-7dd96164c9b5/kube-rbac-proxy/0.log" Jan 30 18:13:47 crc kubenswrapper[4766]: I0130 18:13:47.190351 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfspk_4ad0227f-0410-4f5e-bfc5-7dd96164c9b5/speaker/0.log" Jan 30 18:13:48 crc kubenswrapper[4766]: I0130 18:13:48.055059 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/frr/0.log" Jan 30 18:13:57 crc kubenswrapper[4766]: I0130 18:13:57.622392 4766 patch_prober.go:28] interesting pod/oauth-openshift-6fffd54687-fl5rm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:13:57 crc kubenswrapper[4766]: I0130 18:13:57.623228 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" podUID="dfb08685-43c0-4cd6-bb82-51f5df825923" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:13:59 crc kubenswrapper[4766]: I0130 18:13:59.915768 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.153880 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.192495 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.237320 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.357407 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.377612 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.420839 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/extract/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.611427 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.802357 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.810799 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.810930 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.996616 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/extract/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.012914 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.051041 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.173313 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.336871 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.386556 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.388097 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.587247 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.617261 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.642651 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/extract/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.828660 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.016177 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.020280 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.042113 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.220662 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/extract/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.245073 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.265016 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.401000 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 
18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.609524 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.612720 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.625620 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.991303 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.015613 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.175465 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.429795 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.449093 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.466880 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.581160 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/registry-server/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.665905 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.683534 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.899634 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rwhkx_2b001665-9e64-4f29-b35f-5f702206ae07/marketplace-operator/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.937521 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.215338 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 
18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.241940 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.248155 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.432437 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.501909 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.680850 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.883485 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/registry-server/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.929536 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.939012 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.958164 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/registry-server/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.005254 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.186245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.193069 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.435273 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/registry-server/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.597968 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-npbz4_ed5054c0-0009-40bb-8b4c-6e1a4da07b41/prometheus-operator/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.714462 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-qm4dx_86dd422f-41b2-438f-9a62-e558efc71c90/prometheus-operator-admission-webhook/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.724006 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-v5dzf_4e9a3cc5-7614-4db3-8c5b-590bff436549/prometheus-operator-admission-webhook/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.836917 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbt8s_ccbd3ff2-7dc6-488c-ae64-d0710464e20d/operator/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.936692 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-bgqzt_9f9dfe10-4d1d-4081-b3f3-4e7e4be37815/perses-operator/0.log" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.168443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169438 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169458 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169505 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169513 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169534 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169723 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.170502 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.172323 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.175675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.185913 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323610 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.426754 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod 
\"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.431642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.455659 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.522720 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.994163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.948685 4766 generic.go:334] "Generic (PLEG): container finished" podID="f974fd4d-c161-41f5-b6c4-1466867ec240" containerID="d1176e19e2835c2856e4fcfcfc22dd8a9e0ab5466990c91977282d854b6f777e" exitCode=0 Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.948789 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerDied","Data":"d1176e19e2835c2856e4fcfcfc22dd8a9e0ab5466990c91977282d854b6f777e"} Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.949210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerStarted","Data":"6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16"} Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.351110 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499113 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499283 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.501694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume" (OuterVolumeSpecName: "config-volume") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.520354 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs" (OuterVolumeSpecName: "kube-api-access-z9vbs") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "kube-api-access-z9vbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.520473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601368 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601408 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601421 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966424 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerDied","Data":"6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16"} Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966728 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966532 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:04 crc kubenswrapper[4766]: I0130 18:15:04.449144 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 18:15:04 crc kubenswrapper[4766]: I0130 18:15:04.461283 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 18:15:06 crc kubenswrapper[4766]: I0130 18:15:06.052757 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" path="/var/lib/kubelet/pods/1d5ff932-157e-49bf-9f1e-b4dc767de05e/volumes" Jan 30 18:15:36 crc kubenswrapper[4766]: I0130 18:15:36.264258 4766 scope.go:117] "RemoveContainer" containerID="2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee" Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.373957 4766 generic.go:334] "Generic (PLEG): container finished" podID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" exitCode=0 Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.373999 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerDied","Data":"776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8"} Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.375133 4766 scope.go:117] "RemoveContainer" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" Jan 30 18:15:51 crc kubenswrapper[4766]: I0130 18:15:51.049567 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/gather/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.067454 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.068068 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vzpss/must-gather-w799p" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="copy" containerID="cri-o://fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" gracePeriod=2 Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.080306 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.465998 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/copy/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.467540 4766 generic.go:334] "Generic (PLEG): container finished" podID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerID="fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" exitCode=143 Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.467594 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a83fa3d0421a4c02b1382ecdde5f2c954b6d5a37559a41d3ebe5dfe743483d" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.535868 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/copy/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.536238 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.646563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"857930ca-2670-4ab4-ba29-ece210bd2af5\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.646671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"857930ca-2670-4ab4-ba29-ece210bd2af5\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.653468 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc" (OuterVolumeSpecName: "kube-api-access-mrqvc") pod "857930ca-2670-4ab4-ba29-ece210bd2af5" (UID: "857930ca-2670-4ab4-ba29-ece210bd2af5"). InnerVolumeSpecName "kube-api-access-mrqvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.750316 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.806732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "857930ca-2670-4ab4-ba29-ece210bd2af5" (UID: "857930ca-2670-4ab4-ba29-ece210bd2af5"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.860291 4766 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 18:16:00 crc kubenswrapper[4766]: I0130 18:16:00.052953 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" path="/var/lib/kubelet/pods/857930ca-2670-4ab4-ba29-ece210bd2af5/volumes" Jan 30 18:16:00 crc kubenswrapper[4766]: I0130 18:16:00.475776 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:16:09 crc kubenswrapper[4766]: I0130 18:16:09.044940 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:16:09 crc kubenswrapper[4766]: I0130 18:16:09.045631 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:16:36 crc kubenswrapper[4766]: I0130 18:16:36.339677 4766 scope.go:117] "RemoveContainer" containerID="fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" Jan 30 18:16:36 crc kubenswrapper[4766]: I0130 18:16:36.370194 4766 scope.go:117] "RemoveContainer" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" Jan 30 18:16:39 crc kubenswrapper[4766]: I0130 18:16:39.045499 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:16:39 crc kubenswrapper[4766]: I0130 18:16:39.045891 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.044939 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.045546 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.045588 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.046033 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.046080 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363" gracePeriod=600 Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193442 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363" exitCode=0 Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193565 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:17:10 crc kubenswrapper[4766]: I0130 18:17:10.204459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"fd13e073ddfc7e30e655ecb7ad5c4e75009901f530223a8332344d3d9e5f1cc1"} Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.007018 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:25 crc kubenswrapper[4766]: E0130 18:18:25.007989 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f974fd4d-c161-41f5-b6c4-1466867ec240" containerName="collect-profiles" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.008001 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f974fd4d-c161-41f5-b6c4-1466867ec240" containerName="collect-profiles" Jan 30 18:18:25 crc kubenswrapper[4766]: E0130 18:18:25.008010 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="copy" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.008016 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="copy" Jan 30 18:18:25 crc kubenswrapper[4766]: E0130 18:18:25.008036 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="gather" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.008042 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="gather" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.008241 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="gather" Jan 30 18:18:25 crc 
kubenswrapper[4766]: I0130 18:18:25.008265 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="copy" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.008277 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f974fd4d-c161-41f5-b6c4-1466867ec240" containerName="collect-profiles" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.011339 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.021033 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.083983 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq8p9\" (UniqueName: \"kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.084042 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.084323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.186738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.186995 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq8p9\" (UniqueName: \"kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.187026 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.187240 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: 
I0130 18:18:25.187329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.206160 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq8p9\" (UniqueName: \"kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9\") pod \"redhat-operators-j67z4\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.356562 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.892008 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:25 crc kubenswrapper[4766]: I0130 18:18:25.903545 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerStarted","Data":"25aa98c51df868e605113402bbbdfcb90f7ebc0e1858d2fc58b8c12fc49ffbe0"} Jan 30 18:18:26 crc kubenswrapper[4766]: I0130 18:18:26.918054 4766 generic.go:334] "Generic (PLEG): container finished" podID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" containerID="fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a" exitCode=0 Jan 30 18:18:26 crc kubenswrapper[4766]: I0130 18:18:26.918154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerDied","Data":"fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a"} Jan 30 18:18:26 crc kubenswrapper[4766]: I0130 18:18:26.921869 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:18:27 crc kubenswrapper[4766]: I0130 18:18:27.938219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerStarted","Data":"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1"} Jan 30 18:18:32 crc kubenswrapper[4766]: I0130 18:18:32.986263 4766 generic.go:334] "Generic (PLEG): container finished" podID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" containerID="accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1" exitCode=0 Jan 30 18:18:32 crc kubenswrapper[4766]: I0130 18:18:32.986449 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerDied","Data":"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1"} Jan 30 18:18:33 crc kubenswrapper[4766]: I0130 18:18:33.997478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerStarted","Data":"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1"} Jan 30 18:18:34 crc kubenswrapper[4766]: I0130 18:18:34.022933 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-j67z4" podStartSLOduration=3.169050514 podStartE2EDuration="10.022916107s" podCreationTimestamp="2026-01-30 18:18:24 +0000 UTC" firstStartedPulling="2026-01-30 18:18:26.92147366 +0000 UTC m=+6961.559431006" lastFinishedPulling="2026-01-30 18:18:33.775339253 +0000 UTC m=+6968.413296599" observedRunningTime="2026-01-30 18:18:34.013635244 +0000 UTC m=+6968.651592610" watchObservedRunningTime="2026-01-30 18:18:34.022916107 +0000 UTC m=+6968.660873453" Jan 30 18:18:35 crc kubenswrapper[4766]: I0130 18:18:35.357077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:35 crc kubenswrapper[4766]: I0130 18:18:35.358539 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:36 crc kubenswrapper[4766]: I0130 18:18:36.405611 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j67z4" podUID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" containerName="registry-server" probeResult="failure" output=< Jan 30 18:18:36 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 18:18:36 crc kubenswrapper[4766]: > Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.635452 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.638706 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.650010 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.737724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptfw\" (UniqueName: \"kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.737787 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.737832 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.840039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptfw\" (UniqueName: \"kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.840356 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.840403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.840907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.840919 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.865411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptfw\" (UniqueName: \"kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw\") pod \"redhat-marketplace-n9grz\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:41 crc kubenswrapper[4766]: I0130 18:18:41.969646 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:42 crc kubenswrapper[4766]: I0130 18:18:42.457585 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:42 crc kubenswrapper[4766]: W0130 18:18:42.464768 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe908b4_6dc0_40b6_8592_d0a6d1507e13.slice/crio-a815ecf4d5032a72a5755ae676639b9c66a77248cd4b9c5de256ded4e60bbf71 WatchSource:0}: Error finding container a815ecf4d5032a72a5755ae676639b9c66a77248cd4b9c5de256ded4e60bbf71: Status 404 returned error can't find the container with id a815ecf4d5032a72a5755ae676639b9c66a77248cd4b9c5de256ded4e60bbf71 Jan 30 18:18:43 crc kubenswrapper[4766]: I0130 18:18:43.095026 4766 generic.go:334] "Generic (PLEG): container finished" podID="dfe908b4-6dc0-40b6-8592-d0a6d1507e13" containerID="5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a" exitCode=0 Jan 30 18:18:43 crc kubenswrapper[4766]: I0130 18:18:43.095105 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerDied","Data":"5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a"} Jan 30 18:18:43 crc kubenswrapper[4766]: I0130 18:18:43.095359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerStarted","Data":"a815ecf4d5032a72a5755ae676639b9c66a77248cd4b9c5de256ded4e60bbf71"} Jan 30 18:18:45 crc kubenswrapper[4766]: I0130 18:18:45.121794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerStarted","Data":"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb"} Jan 30 18:18:45 crc kubenswrapper[4766]: I0130 18:18:45.408759 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:45 crc kubenswrapper[4766]: I0130 18:18:45.457806 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:45 crc kubenswrapper[4766]: E0130 18:18:45.465827 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe908b4_6dc0_40b6_8592_d0a6d1507e13.slice/crio-conmon-1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb.scope\": RecentStats: unable to find data in memory cache]" Jan 30 18:18:46 crc kubenswrapper[4766]: I0130 18:18:46.133667 4766 generic.go:334] "Generic (PLEG): container finished" podID="dfe908b4-6dc0-40b6-8592-d0a6d1507e13" containerID="1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb" exitCode=0 Jan 30 18:18:46 crc kubenswrapper[4766]: I0130 18:18:46.133806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerDied","Data":"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb"} Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.144434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" 
event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerStarted","Data":"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745"} Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.171377 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9grz" podStartSLOduration=2.749143761 podStartE2EDuration="6.171359063s" podCreationTimestamp="2026-01-30 18:18:41 +0000 UTC" firstStartedPulling="2026-01-30 18:18:43.097635807 +0000 UTC m=+6977.735593153" lastFinishedPulling="2026-01-30 18:18:46.519851109 +0000 UTC m=+6981.157808455" observedRunningTime="2026-01-30 18:18:47.170261333 +0000 UTC m=+6981.808218699" watchObservedRunningTime="2026-01-30 18:18:47.171359063 +0000 UTC m=+6981.809316409" Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.199122 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.199426 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j67z4" podUID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" containerName="registry-server" containerID="cri-o://95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1" gracePeriod=2 Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.781504 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.916353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content\") pod \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.916502 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq8p9\" (UniqueName: \"kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9\") pod \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.916765 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities\") pod \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\" (UID: \"7fdbb96f-4b42-4115-8adc-aecc858b58a8\") " Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.918705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities" (OuterVolumeSpecName: "utilities") pod "7fdbb96f-4b42-4115-8adc-aecc858b58a8" (UID: "7fdbb96f-4b42-4115-8adc-aecc858b58a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:18:47 crc kubenswrapper[4766]: I0130 18:18:47.937366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9" (OuterVolumeSpecName: "kube-api-access-pq8p9") pod "7fdbb96f-4b42-4115-8adc-aecc858b58a8" (UID: "7fdbb96f-4b42-4115-8adc-aecc858b58a8"). InnerVolumeSpecName "kube-api-access-pq8p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.019269 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.019561 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq8p9\" (UniqueName: \"kubernetes.io/projected/7fdbb96f-4b42-4115-8adc-aecc858b58a8-kube-api-access-pq8p9\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.059133 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fdbb96f-4b42-4115-8adc-aecc858b58a8" (UID: "7fdbb96f-4b42-4115-8adc-aecc858b58a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.121717 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdbb96f-4b42-4115-8adc-aecc858b58a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.155038 4766 generic.go:334] "Generic (PLEG): container finished" podID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" containerID="95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1" exitCode=0 Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.155109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerDied","Data":"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1"} Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.155134 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j67z4" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.155161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j67z4" event={"ID":"7fdbb96f-4b42-4115-8adc-aecc858b58a8","Type":"ContainerDied","Data":"25aa98c51df868e605113402bbbdfcb90f7ebc0e1858d2fc58b8c12fc49ffbe0"} Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.155206 4766 scope.go:117] "RemoveContainer" containerID="95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.175589 4766 scope.go:117] "RemoveContainer" containerID="accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.192466 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.200543 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j67z4"] Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.214503 4766 scope.go:117] "RemoveContainer" containerID="fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.251662 4766 scope.go:117] "RemoveContainer" containerID="95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1" Jan 30 18:18:48 crc kubenswrapper[4766]: E0130 18:18:48.252439 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1\": container with ID starting with 95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1 not found: ID does not exist" containerID="95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.252478 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1"} err="failed to get container status \"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1\": rpc error: code = NotFound desc = could not find container \"95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1\": container with ID starting with 95f875ad78f56bbe0fd804a30ed48bba629edce322d1e02fe23faddb987a6bb1 not found: ID does not exist" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.252523 4766 scope.go:117] "RemoveContainer" containerID="accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1" Jan 30 18:18:48 crc kubenswrapper[4766]: E0130 18:18:48.253353 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1\": container with ID starting with accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1 not found: ID does not exist" containerID="accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.253384 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1"} err="failed to get container status \"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1\": rpc error: code = NotFound desc = could not find container 
\"accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1\": container with ID starting with accd4aa68dfcfc94c34d23626b9e29fbbe824c4a630f6014d83e22ddcc946be1 not found: ID does not exist" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.253403 4766 scope.go:117] "RemoveContainer" containerID="fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a" Jan 30 18:18:48 crc kubenswrapper[4766]: E0130 18:18:48.253726 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a\": container with ID starting with fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a not found: ID does not exist" containerID="fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a" Jan 30 18:18:48 crc kubenswrapper[4766]: I0130 18:18:48.253777 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a"} err="failed to get container status \"fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a\": rpc error: code = NotFound desc = could not find container \"fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a\": container with ID starting with fd22a93597e0fcd80270e012e247c2498c85cfc199144123dfdeb638c35ec56a not found: ID does not exist" Jan 30 18:18:50 crc kubenswrapper[4766]: I0130 18:18:50.056410 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdbb96f-4b42-4115-8adc-aecc858b58a8" path="/var/lib/kubelet/pods/7fdbb96f-4b42-4115-8adc-aecc858b58a8/volumes" Jan 30 18:18:51 crc kubenswrapper[4766]: I0130 18:18:51.970128 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:51 crc kubenswrapper[4766]: I0130 18:18:51.970704 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:52 crc kubenswrapper[4766]: I0130 18:18:52.021830 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:52 crc kubenswrapper[4766]: I0130 18:18:52.251797 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:52 crc kubenswrapper[4766]: I0130 18:18:52.598078 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.221858 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9grz" podUID="dfe908b4-6dc0-40b6-8592-d0a6d1507e13" containerName="registry-server" containerID="cri-o://59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745" gracePeriod=2 Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.715702 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.866728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ptfw\" (UniqueName: \"kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw\") pod \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.866957 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities\") pod \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.866992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content\") pod \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\" (UID: \"dfe908b4-6dc0-40b6-8592-d0a6d1507e13\") " Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.867800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities" (OuterVolumeSpecName: "utilities") pod "dfe908b4-6dc0-40b6-8592-d0a6d1507e13" (UID: "dfe908b4-6dc0-40b6-8592-d0a6d1507e13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.872605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw" (OuterVolumeSpecName: "kube-api-access-4ptfw") pod "dfe908b4-6dc0-40b6-8592-d0a6d1507e13" (UID: "dfe908b4-6dc0-40b6-8592-d0a6d1507e13"). InnerVolumeSpecName "kube-api-access-4ptfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.894768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfe908b4-6dc0-40b6-8592-d0a6d1507e13" (UID: "dfe908b4-6dc0-40b6-8592-d0a6d1507e13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.969767 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ptfw\" (UniqueName: \"kubernetes.io/projected/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-kube-api-access-4ptfw\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.969813 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:54 crc kubenswrapper[4766]: I0130 18:18:54.969824 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe908b4-6dc0-40b6-8592-d0a6d1507e13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.232428 4766 generic.go:334] "Generic (PLEG): container finished" podID="dfe908b4-6dc0-40b6-8592-d0a6d1507e13" containerID="59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745" exitCode=0 Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.232490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerDied","Data":"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745"} Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.232740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9grz" event={"ID":"dfe908b4-6dc0-40b6-8592-d0a6d1507e13","Type":"ContainerDied","Data":"a815ecf4d5032a72a5755ae676639b9c66a77248cd4b9c5de256ded4e60bbf71"} Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.232763 4766 scope.go:117] "RemoveContainer" containerID="59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.232507 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9grz" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.250482 4766 scope.go:117] "RemoveContainer" containerID="1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.269202 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.277560 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9grz"] Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.278083 4766 scope.go:117] "RemoveContainer" containerID="5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.313237 4766 scope.go:117] "RemoveContainer" containerID="59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745" Jan 30 18:18:55 crc kubenswrapper[4766]: E0130 18:18:55.313700 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745\": container with ID starting with 59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745 not found: ID does not exist" containerID="59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.313739 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745"} err="failed to get container status \"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745\": rpc error: code = NotFound desc = could not find container \"59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745\": container with ID starting with 59bcd37c6e301c290ce2df88c7608fb511c354303d5b90be0ea71a7941217745 not found: ID does not exist" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.313768 4766 scope.go:117] "RemoveContainer" containerID="1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb" Jan 30 18:18:55 crc kubenswrapper[4766]: E0130 18:18:55.314245 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb\": container with ID starting with 1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb not found: ID does not exist" containerID="1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.314272 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb"} err="failed to get container status \"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb\": rpc error: code = NotFound desc = could not find container \"1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb\": container with ID starting with 1d7e258ec18efb6ce41c4151e906585c737b1e1eecfeab899dcfca107bcc1ccb not found: ID does not exist" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.314286 4766 scope.go:117] "RemoveContainer" containerID="5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a" Jan 30 18:18:55 crc kubenswrapper[4766]: E0130 18:18:55.314559 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a\": container with ID starting with 5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a not found: ID does not exist" containerID="5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a" Jan 30 18:18:55 crc kubenswrapper[4766]: I0130 18:18:55.314583 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a"} err="failed to get container status \"5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a\": rpc error: code = NotFound desc = could not find container \"5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a\": container with ID starting with 5b2c5909b27aa5c513b4f049693b6dedcff6647ded838b819eeaede80b25c33a not found: ID does not exist" Jan 30 18:18:56 crc kubenswrapper[4766]: I0130 18:18:56.052066 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe908b4-6dc0-40b6-8592-d0a6d1507e13" path="/var/lib/kubelet/pods/dfe908b4-6dc0-40b6-8592-d0a6d1507e13/volumes" Jan 30 18:19:09 crc kubenswrapper[4766]: I0130 18:19:09.045406 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:19:09 crc kubenswrapper[4766]: I0130 18:19:09.045941 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137173077024460 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137173077017375 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137154752016517 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137154752015467 5ustar corecore